How to configure docker-compose.yml to use passbolt with docker-compose?

I use Docker with WSL2 on a Debian VM and I'm trying to install Passbolt.
I followed the steps in this guide: https://help.passbolt.com/hosting/install/ce/docker.html.
When I run docker-compose up, it works and I can reach the database with telnet, but I cannot reach the Passbolt instance, either with telnet or with my browser.
It's strange because the two containers, mariadb and passbolt, are both running.
This is my docker-compose.yml:
version: '3.4'
services:
  db:
    image: mariadb:10.3
    env_file:
      - env/mysql.env
    volumes:
      - database_volume:/var/lib/mysql
    ports:
      - "127.0.0.1:3306:3306"
  passbolt:
    image: passbolt/passbolt:latest-ce
    # Alternatively you can use rootless:
    # image: passbolt/passbolt:latest-ce-non-root
    tty: true
    container_name: passbolt
    restart: always
    depends_on:
      - db
    env_file:
      - env/passbolt.env
    volumes:
      - gpg_volume:/etc/passbolt/gpg
      - images_volume:/usr/share/php/passbolt/webroot/img/public
    command: ["/usr/bin/wait-for.sh", "-t", "0", "db:3306", "--", "/docker-entrypoint.sh"]
    ports:
      - 80:80
      - 443:443
      # Alternatively for non-root images:
      # - 80:8080
      # - 443:4433
volumes:
  database_volume:
  gpg_volume:
  images_volume:
If anybody can help me, thanks!

Your docker-compose file looks quite ordinary and I don't see any issues.
Can you please attach your passbolt.env and mysql.env (with any sensitive information removed, of course)?
Also, the passbolt.conf (VirtualHost) might be useful.
Make sure that the DNS A record is valid and that no firewall is blocking the ports.
Error logs would be appreciated as well.
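In the meantime, a few generic checks can narrow this down. A minimal diagnostic sequence, assuming it is run from the Debian VM where compose is running:

docker-compose ps              # both containers should show "Up"
docker-compose logs passbolt   # look for entrypoint, GPG or Apache errors
curl -I http://localhost       # test HTTP from inside the VM
curl -kI https://localhost     # -k skips the self-signed certificate check

If curl works from inside the VM but the browser on the Windows host cannot connect, the problem is likely WSL2/VM port forwarding rather than the compose file itself.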

Related

How to create Docker images and run them on an EC2 instance?

I'm very new to the Docker world. I have a docker-compose file which works fine for me.
But how do I create these Docker images and run them on an EC2 instance?
Any help would be appreciated.
PS: I don't want to use ECS or ECR for this. I hope Docker Hub works fine for storing and retrieving these images (correct me if I'm wrong).
Thanks.
version: "3"
services:
app:
image: node:12.13.1
volumes:
- ./:/app
working_dir: /app
depends_on:
- mongo
- nats
environment:
NODE_ENV: development
ports:
- 3000:3000
command: npm run dev
app_2:
image: node:12.13.1
volumes:
- ../app_2/:/app
working_dir: /app_2
depends_on:
- mongo
- nats
links:
- mongo
environment:
NODE_ENV: development
ports:
- 4000:4000
command: npm run dev
mongo:
image: mongo
expose:
- 27017
ports:
- "27017:27017"
volumes:
- ./data/db:/data/db
nats:
image: 'nats:2.1.2'
expose:
- "4222"
ports:
- "8222:8222"
hostname: nats-server
Install docker and docker-compose on the instance, then just run docker-compose up. Just remember to open port 4000 of the EC2 instance in its security group so it is accessible from your IP, or from any other IPs that need it.
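If you want to stick with Docker Hub, the flow is: build the images locally, push them, then pull and run them on the instance. A rough sketch, where myuser/myapp is a hypothetical Docker Hub repository name; note that your compose file bind-mounts the source code (./:/app), so you would either copy the project to the EC2 instance as well or bake the code into a custom image with a Dockerfile:

# on your machine
docker build -t myuser/myapp:latest .
docker login
docker push myuser/myapp:latest

# on the EC2 instance, after installing docker and docker-compose
docker pull myuser/myapp:latest
docker-compose up -d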

env-file and MariaDB in docker-compose

I'm trying to set up nextcloud on a Raspberry Pi 3B+ with MariaDB, roughly following this example:
https://github.com/nextcloud/docker/blob/master/.examples/docker-compose/with-nginx-proxy/mariadb/apache/docker-compose.yml
My compose file looks like this:
version: '3'
services:
  db:
    image: mariadb
    env_file:
      - pi.env
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - ${BASE_PATH}/db:/var/lib/mysql
  nextcloud:
    image: nextcloud:apache
    env_file:
      - pi.env
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ${BASE_PATH}/www:/var/www
    depends_on:
      - db
    environment:
      - MYSQL_HOST=db
Then there is the pi.env file:
MYSQL_PASSWORD=secure-password
MYSQL_ROOT_PASSWORD=even-more-secure.password
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
BASE_PATH=/tmp
After running docker-compose up from the directory the yaml and the env file are sitting in, the two containers start up fine. Alas, the database connection cannot be established because the db container only accepts a blank password (popping a shell in the container and running mysql -u nextcloud without a password gives me database access). Still, the $MYSQL_ROOT_PASSWORD environment variable can be correctly echoed from the container.
If I start a mariadb-image alone with docker run -e MYSQL_ROOT_PASSWORD=secure-password, everything behaves as expected.
Can someone point me to my mistake?
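One common cause worth ruling out (an assumption, not something confirmed by your description): MariaDB only applies the MYSQL_* variables when it initializes an empty data directory, so a leftover ${BASE_PATH}/db from an earlier run keeps whatever credentials it was first created with. Wiping the directory forces a re-initialization:

docker-compose down
sudo rm -rf /tmp/db    # BASE_PATH=/tmp here; this deletes all database data
docker-compose up -d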
I finally cured my setup some time ago. Sadly, I can no longer reconstruct what did the trick (and my git commit messages were not as clear to my future self as I hoped they would be :D).
But it appears that exclusively declaring the environment variables for the database password in the pi.env file, instead of in the docker-compose.yaml, did the trick.
My docker-compose.yaml:
services:
  db:
    image: jsurf/rpi-mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci
    restart: always
    volumes:
      - db:/var/lib/mysql
    env_file:
      - pi.env
  nextcloud:
    image: nextcloud:apache
    restart: always
    container_name: nextcloud
    volumes:
      - www:/var/www/html
    environment:
      - VIRTUAL_HOST=${VIRTUAL_HOST}
      - LETSENCRYPT_HOST=${VIRTUAL_HOST}
      - LETSENCRYPT_EMAIL=${LETSENCRYPT_EMAIL}
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS=${VIRTUAL_HOST}
      - NEXTCLOUD_TRUSTED_DOMAINS=proxy
    env_file:
      - pi.env
    depends_on:
      - db
    networks:
      - proxy-tier
      - default
pi.env:
MYSQL_PASSWORD=secure-password
MYSQL_ROOT_PASSWORD=even-more-secure.password
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
But thank you nonetheless, @Zanndorin!
I know this is a super late answer, but I just stumbled upon this while Googling something completely unrelated.
If I recall correctly, you have to tell docker-compose to actually pass the ENV variables to the container by declaring them under environment:
environment:
  - MYSQL_HOST=db
  - MYSQL_PASSWORD
  - MYSQL_USER
I have never declared the .env file in the docker-compose, so maybe that already fixes the issue. I use it this way (I also have a .env file from which I sometimes override some values).
Example from my developer MariaDB container:
environment:
  - MYSQL_DATABASE=mydb
  - MYSQL_USER=${DB_USER}
  - MYSQL_PASSWORD=${DB_PASSWORD}
  - MYSQL_ROOT_PASSWORD

Docker services restart in no particular order when using docker-compose depends_on

I'm trying to set up a SonarQube service in Docker for Windows, using MySQL as the database.
I am using the compose file below, relying on depends_on to control the startup order:
db -> sonarqube
But when the Docker service or Windows restarts, the containers start up in no particular sequence.
This causes errors when SonarQube tries to connect to MySQL before the MySQL service has started.
version: '3'
services:
  sonarqube:
    image: sonarqube:6.5
    container_name: sonarqube
    restart: always
    environment:
      - SONARQUBE_JDBC_URL=jdbc:mysql://db:3306/sonar?useSSL=true&useUnicode=true&characterEncoding=utf8
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
    ports:
      - 9000:9000
      - 9002:9002
      - 9092:9092
    volumes:
      - ../../volumes/data/sonarqube/conf:/opt/sonarqube/conf
      - ../../volumes/data/sonarqube/data:/opt/sonarqube/data
      - ../../volumes/data/sonarqube/extensions:/opt/sonarqube/extensions
      - ../../volumes/data/sonarqube/lib/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    depends_on:
      - db
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - 3000:3000
    volumes:
      - ../../volumes/data/grafana-storage:/var/lib/grafana
    restart: always
    depends_on:
      - db
  # mysql service for sonarqube & grafana
  db:
    image: mysql:5.7
    container_name: sonar-mysql
    restart: always
    environment:
      - MYSQL_DATABASE=sonar
      - MYSQL_USER=sonar
      - MYSQL_PASSWORD=sonar
      - MYSQL_ROOT_PASSWORD=Password1
      - MAX_ALLOWED_PACKET=13421772800
    volumes:
      - ../../volumes/data/mysql:/var/lib/mysql
    ports:
      - 3306:3306
    command: mysqld --max_allowed_packet=80M --federated --event_scheduler=1
After reading some of the official docker-compose documentation,
it seems the recommended way is to handle this in the application code, for example by updating the entrypoint script to wait for the other service to be online before starting the application.
That is probably the right way, but I still want to know: is there any way to control the container startup sequence when the Docker service restarts?
Thanks.
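One option, if you can use the newer Compose Spec (or file format 2.1; classic 3.x files dropped the feature), is to give the database a healthcheck and make the dependents wait for it to be healthy. A minimal sketch along those lines:

services:
  db:
    image: mysql:5.7
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 10
  sonarqube:
    image: sonarqube:6.5
    depends_on:
      db:
        condition: service_healthy

With plain v3 files, the usual workaround is the one the documentation suggests: wrap the container's entrypoint in a wait-for-it style script, as the passbolt example at the top of this page does with wait-for.sh.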

nginx-proxy : how to expose the proxy over the internet on AWS?

Firstly, thank you for your time.
I was trying my hand at Docker when I saw this article:
http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
Please have a look at my docker-compose.yml file; I am using the images below:
jwilder/nginx-proxy:latest
grafana/grafana:4.6.2
version: "2"
services:
proxy:
build: ./proxy
container_name: proxy
restart: always
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
ports:
- 80:80
- 443:443
grafana:
build: ./grafana
container_name: grafana
volumes:
- grafana-data:/var/lib/grafana
environment:
VIRTUAL_HOST: grafana.localhost
GF_SECURITY_ADMIN_PASSWORD: password
depends_on:
- proxy
volumes:
grafana-data:
When I run docker-compose up -d on my local system, I am able to access the Grafana container.
Now that I have deployed this Docker app on AWS, how do I access the Grafana container on EC2 with VIRTUAL_HOST?
Any help or ideas on how to do this will be appreciated! Thanks!
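nginx-proxy routes requests by the Host header, so grafana.localhost will only ever resolve on your own machine. On EC2 the usual approach (a sketch, assuming you own a domain such as grafana.example.com, which is hypothetical here) is to point a DNS A record at the instance's public IP, open ports 80/443 in the security group, and set VIRTUAL_HOST to that name:

  grafana:
    build: ./grafana
    environment:
      VIRTUAL_HOST: grafana.example.com   # hypothetical domain; replace with yours
      GF_SECURITY_ADMIN_PASSWORD: password

For a quick test without DNS, you can also send the header manually: curl -H "Host: grafana.localhost" http://<ec2-public-ip>/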

Docker compose up does not restart on reboot

I have successfully created docker containers and they work when loaded using:
sudo docker-compose up -d
The yml is as follows:
services:
  nginx:
    build: ./nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./static:/static
    links:
      - node:node
  node:
    build: ./node
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./node:/usr/src/app
      - /usr/src/app/node_modules
Am I supposed to create a service for this? Reading the documentation, I thought the containers would reload on reboot if restart was set to always.
FYI: the yml is inside a projects directory in the home of the base user, ubuntu.
I tried checking for solutions on Stack Overflow but could not find anything appropriate. Thanks.
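One thing worth checking (an assumption, since the question does not show the daemon configuration): restart: always only takes effect if the Docker daemon itself starts at boot, at which point it restarts the containers that were running. On Ubuntu with systemd:

sudo systemctl enable docker    # make the daemon start on boot
sudo docker-compose up -d       # start the stack once; it comes back after reboots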
