New to Docker here, and I'm using Laradock to set up my environment for a Craft CMS project. I'm able to get it installed and set up, however I'm having an issue when trying to connect to the MySQL container that Laradock creates from the docker-compose.yml file.
Here's the db related portion in my docker-compose.yml
mysql:
  build:
    context: ./mysql
    args:
      - MYSQL_DATABASE=homestead
      - MYSQL_USER=homestead
      - MYSQL_PASSWORD=secret
      - MYSQL_ROOT_PASSWORD=root
  volumes:
    - mysql:/var/lib/mysql
  ports:
    - "3306:3306"
Within my craft/config/db.php file, I have the following settings:
return array(
    '.dev' => array(
        'tablePrefix' => 'craft',
        'server' => 'mysql',
        'database' => 'homestead',
        'user' => 'homestead',
        'password' => 'secret',
    ),
);
However, I'm getting a "Craft can’t connect to the database with the credentials in craft/config/db.php" error.
My question is: when Docker creates the MySQL container, I'm assuming it uses the credentials in the docker-compose.yml file to create that database and allow access to it. If so, as long as my container is running and the credentials in my db.php file match the credentials in docker-compose.yml, shouldn't it connect?
If I want to update the credentials in the MySQL container, can I simply update the values in both files and run docker-compose up -d mysql?
Thanks!
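One detail worth knowing when changing those values later: the official MySQL image only applies MYSQL_DATABASE, MYSQL_USER and the password variables the first time the data volume is initialized, and Laradock's mysql build appears to simply forward the args above as those variables. A minimal sketch of the same service with that assumption spelled out in comments:

mysql:
  build:
    context: ./mysql
    args:
      # Assumption: Laradock's mysql Dockerfile forwards these build args
      # as the standard MYSQL_* environment variables of the official image.
      - MYSQL_DATABASE=homestead
      - MYSQL_USER=homestead
      - MYSQL_PASSWORD=secret
      - MYSQL_ROOT_PASSWORD=root
  volumes:
    # Credentials are created only when this volume is first initialized.
    # After changing them, the volume has to be recreated (for example with
    # docker-compose down -v, which removes ALL of the project's volumes)
    # before docker-compose up -d mysql will pick up the new values.
    - mysql:/var/lib/mysql
  ports:
    - "3306:3306"

Simply re-running docker-compose up -d mysql against an existing volume keeps the old users and passwords.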
I ran into a kind of similar issue recently because the containers were started in a "random" order. Maybe this is your issue too; I don't know for sure.
In brief, this was my case:
- two containers, php-fpm and mysql.
- Running docker-compose up -d --build --no-cache built everything, but php-fpm finished first, so by then mysql was still doing its setup work to get the service ready.
- The php-fpm application couldn't connect to the MySQL server because it wasn't ready yet.
The solution was to use the newer Docker Compose file format and set version 2.1 in the docker-compose.yml. Here is my working example:
version: '2.1'
services:
  php-fpm:
    build:
      context: docker/php-fpm
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mariadb
    healthcheck:
      test: "exit 0"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
The trick: depends_on (see the Compose documentation) and condition (see the healthcheck in the example above).
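The test: "exit 0" healthcheck only proves the container can run a command, not that MariaDB is already accepting connections. A slightly stricter variant, as a sketch (assuming the mysqladmin client is available inside the image, which it is in the official mysql/mariadb images; the interval and retry numbers are arbitrary):

db:
  image: mariadb
  healthcheck:
    # Succeeds only once the server actually answers a client connection.
    test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-p${MYSQL_ROOT_PASSWORD}"]
    interval: 10s
    timeout: 5s
    retries: 5
  environment:
    MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    MYSQL_DATABASE: ${MYSQL_DATABASE}
    MYSQL_USER: ${MYSQL_USER}
    MYSQL_PASSWORD: ${MYSQL_PASSWORD}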
Related
I've been given an exercise in Docker and I have no idea how to do it; I'm completely lost.
What is requested is the following:
"Create a docker compose file that launches two Maria db databases, in 3 different environments "
"Depending on the environment, they should run on 3 different ports:
Development: 3306
Production: 3307
Testing: 3308
The environment should be sent as a parameter added to the command
docker compose "
The databases should be interconnected with each other
and beyond that: "With the command:
docker-compose docker-file --dev
docker-compose docker-file --pre
docker-compose docker-file --pro
passing that parameter, the first one would run a production environment, the next one a preproduction environment and the other one a development environment.
In each environment there will be variables that change, such as the database port. "
Up to here is all the information that they have given me and everything that has been asked of me.
Could someone help me solve this?
The only thing I managed to do was create the 2 databases, but I am missing the environment section, which is what I don't understand.
Code I have in a docker-compose.yml file:
version: '3'
services:
  mariadb:
    image: mariadb
    restart: always
    environment:
      MYSQL_DATABASE: 'test'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'root'
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      - 3306:3306
    expose:
      - "3306"
    volumes:
      - ./mariadb:/var/lib/mysql
  mariadb2:
    image: mariadb
    environment:
      MYSQL_DATABASE: 'test2'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'root'
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      - 3305:3305
    volumes:
      - ./mariadb2:/var/lib/mysql
When you launch docker-compose you can add the environment parameters via a .env file. I think that is what they meant when they set the exercise.
The .env file is loaded by default when you launch docker-compose, but you can also point it at a specific file.
You may want to read the documentation first so you can try things out: https://docs.docker.com/compose/environment-variables/
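As a sketch of how that could look for this exercise (the DB_PORT variable name and the per-environment .env file names are made up for illustration): the compose file reads the published port from a variable, and each environment supplies its own value.

# docker-compose.yml
version: '3'
services:
  mariadb:
    image: mariadb
    environment:
      MYSQL_DATABASE: 'test'
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      # DB_PORT comes from the shell environment or an .env file;
      # 3306 is used when nothing is set.
      - "${DB_PORT:-3306}:3306"

A dev.env file could then contain DB_PORT=3306, a testing one DB_PORT=3308, and so on, and you pick one with docker-compose --env-file testing.env up -d (newer Compose versions) or simply by exporting DB_PORT before running docker-compose up.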
I have been reading several related questions and answers, none of which answers my question.
What do I want to do?
I have a MySQL container from docker-compose and I want to connect to it from my Ubuntu host using
localhost:3306
This does not work.
It does work if I use
0.0.0.0:3306
which is not what I want. Why do I want this at all? Because I have to start working on an oooold legacy app that runs on an old MySQL version. I have MySQL 8.0 on my computer and don't want to downgrade just for that one project. The legacy code has about 1000 references to localhost:3306 in it. I could refactor all of that, create a config file, etc., but it would be better if localhost:3306 actually reached my MySQL docker-compose container. Is that possible? What do I have to add to my docker-compose YAML file?
my mysql docker-compose yaml file is this:
version: '3.3'
services:
  sciodb:
    container_name: sciodb
    image: mysql:5.6
    restart: always
    environment:
      MYSQL_DATABASE: 'db'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'myuser'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'test1234'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'test1234'
    ports:
      # <Port exposed> : <MySQL port running inside container>
      - '3306:3306'
    expose:
      # Opens port 3306 on the container
      - '3306'
    # Where our data will be persisted
    volumes:
      - /home/myuser/nmyapp_db:/var/lib/mysql
      - /media/sf_vmwareshare:/var/vmwareshare
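For what it's worth, a compose-level option sometimes used in this situation is host networking, sketched below. Whether it actually helps depends on how the legacy code connects, because most MySQL client libraries treat the bare hostname localhost as a request for a Unix-socket connection rather than TCP to 127.0.0.1, which is usually why localhost:3306 fails while 0.0.0.0:3306 works.

version: '3.3'
services:
  sciodb:
    container_name: sciodb
    image: mysql:5.6
    restart: always
    # The container shares the host's network namespace, so mysqld listens
    # directly on the host's port 3306; "ports:" mappings are not used
    # together with host networking. Linux only.
    network_mode: host
    environment:
      MYSQL_DATABASE: 'db'
      MYSQL_USER: 'myuser'
      MYSQL_PASSWORD: 'test1234'
      MYSQL_ROOT_PASSWORD: 'test1234'
    volumes:
      - /home/myuser/nmyapp_db:/var/lib/mysql
      - /media/sf_vmwareshare:/var/vmwareshare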
I'm trying to run MySQL in a container with the MySQL parameters I defined in the docker-compose.yml file, but I get an access denied error when I run:
mysql -utest -ptest
I'm only able to connect with mysql -uroot -proot.
Help me please.
Thanks.
mysql:
  container_name: mysql
  image: mysql
  restart: always
  volumes:
    - .docker/data/db:/var/lib/mysql
  environment:
    MYSQL_DATABASE: app
    MYSQL_ROOT_PASSWORD: test
    MYSQL_USER: test
    MYSQL_PASSWORD: test
Try connecting with the database name specified, like this:
mysql -utest -ptest app
Explanation:
MYSQL_USER, MYSQL_PASSWORD
These variables are optional, used in conjunction to create a new user and to set that user's password. This user will be granted superuser permissions (see above) for the database specified by the MYSQL_DATABASE variable. Both variables are required for a user to be created.
From MySQL docker hub page
Permissions are granted only on the database specified by the MYSQL_DATABASE environment variable. When you try to log into the default database you have no permissions on it, only on the app database.
My complete docker-compose file.
version: '3.2'
services:
  apache:
    container_name: apache
    build: .docker/apache/
    restart: always
    volumes:
      - .:/var/www/html/app/
    ports:
      - 80:80
    depends_on:
      - php
      - mysql
    links:
      - mysql:mysql
  php:
    container_name: php
    build: .docker/php/
    restart: always
    volumes:
      - .:/var/www/html/app/
    working_dir: /var/www/html/app/
  mysql:
    container_name: mysql
    image: mysql
    restart: always
    volumes:
      - .docker/data/db:/var/lib/mysql
    environment:
      MYSQL_DATABASE: app
      MYSQL_ROOT_PASSWORD: test
      MYSQL_USER: test
      MYSQL_PASSWORD: test
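If the test user ever needs access to more than the app database, the official mysql image has a documented hook for that: any *.sql or *.sh files mounted into /docker-entrypoint-initdb.d are executed when the data directory is first initialized. A sketch (the ./docker/initdb directory and a grants.sql inside it are made-up names for illustration):

mysql:
  container_name: mysql
  image: mysql
  restart: always
  volumes:
    - .docker/data/db:/var/lib/mysql
    # Hypothetical directory containing e.g. grants.sql with extra GRANT
    # statements; the image runs it only on first initialization, i.e.
    # while .docker/data/db is still empty.
    - ./docker/initdb:/docker-entrypoint-initdb.d
  environment:
    MYSQL_DATABASE: app
    MYSQL_ROOT_PASSWORD: test
    MYSQL_USER: test
    MYSQL_PASSWORD: test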
Maybe you could try attaching an interactive bash process to the already running container by following these steps:
Get your container id or name from running docker container ls in your terminal (I'm talking about the mysql container, which should have the mysql name according to your docker-compose.yml file)
Run docker exec -it mysql bash to attach an interactive bash process to the running container
Now, from inside the container, run mysql --user=test --password=test and you should be able to get on with your work
I am new to Docker, and what a wonderful tool it is! Following the Django tutorial, their docs provide a basic docker-compose.yml that looks similar to the following one that I've created.
version: '3'
services:
  web:
    build: .
    container_name: web
    command: python manage.py migrate
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./src:/src
    ports:
      - "8000:8000"
    depends_on:
      - postgres
  postgres:
    image: postgres:latest
    container_name: postgres
    environment:
      POSTGRES_USER: my_user
      POSTGRES_PASSWORD: my_secret_pass!
      POSTGRES_DB: my_db
    ports:
      - "5432:5432"
However, in every single docker-compose file that I see around, the following is added:
volumes:
  - ./postgres-data:/var/lib/postgresql/data
What are those volumes used for? Does it mean that if I now restart my postgres container all my data is deleted, but if I had the volumes it is not?
Is my docker-compose.yml ready for production?
What are those volumes used for?
Volumes persist data from your container to your Docker host.
This:
volumes:
  - ./postgres-data:/var/lib/postgresql/data
means that /var/lib/postgresql/data in your container will be persisted in ./postgres-data in your Docker host.
What @Dan Lowe commented is correct: if you do docker-compose down without volumes, all the data inside your containers will be lost, but if you have volumes, the directories and files you specified will be kept on your Docker host.
For named volumes, you can see this data on your Docker host in /var/lib/docker/volumes/<your_volume_name>/_data even after your container doesn't exist anymore.
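The question's example uses a bind mount (the host path ./postgres-data). The same idea with a named volume, as a sketch (the volume name pgdata is arbitrary), looks like this; either way the data survives docker-compose down as long as the volume itself isn't removed (docker-compose down -v would remove it):

version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_USER: my_user
      POSTGRES_PASSWORD: my_secret_pass!
      POSTGRES_DB: my_db
    volumes:
      # Named volume managed by Docker, stored under /var/lib/docker/volumes
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata: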
I am trying to set up an extensible docker production environment for a few projects on a virtual machine.
My setup is as follows:
Front end: (this works as expected: thanks to Tevin Jeffery for this)
# ~/proxy/docker-compose.yml
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/etc/nginx/vhost.d'
      - '/usr/share/nginx/html'
      - '/etc/nginx/certs:/etc/nginx/certs:ro'
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
    networks:
      - nginx
  letsencrypt-nginx-proxy:
    container_name: letsencrypt-nginx-proxy
    image: 'jrcs/letsencrypt-nginx-proxy-companion'
    volumes:
      - '/etc/nginx/certs:/etc/nginx/certs'
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
    volumes_from:
      - nginx-proxy
    networks:
      - nginx
networks:
  nginx:
    driver: bridge
Database: (planning to add postgres to support rails apps as well)
# ~/mysql/docker-compose.yml
version: '2'
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
    # ports:
    #   - 3036:3036
    networks:
      - db
networks:
  db:
    driver: bridge
And finally, a WordPress blog to test whether everything works:
# ~/wp/docker-compose.yml
version: '2'
services:
  wordpress:
    image: wordpress
    # external_links:
    #   - mysql_db_1:mysql
    ports:
      - 8080:80
    networks:
      - proxy_nginx
      - mysql_db
    environment:
      # for nginx and dockergen
      VIRTUAL_HOST: gizmotronic.ca
      # wordpress setup
      WORDPRESS_DB_HOST: mysql_db_1
      # WORDPRESS_DB_HOST: mysql_db_1:3036
      # WORDPRESS_DB_HOST: mysql
      # WORDPRESS_DB_HOST: mysql:3036
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: wordpress
networks:
  proxy_nginx:
    external: true
  mysql_db:
    external: true
My problem is that the WordPress container cannot connect to the database. I get the following error when I try to start (docker-compose up) the WordPress container:
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 22
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) Connection refused
wp_wordpress_1 exited with code 1
UPDATE:
I was finally able to get this working. My main problem was relying on the container defaults for the environment variables. This created an automatic data volume without a database or user for WordPress. After I added explicit environment variables to the mysql and WordPress containers, I removed the data volume and restarted both containers. This forced the mysql container to recreate the database and user.
To ~/mysql/docker-compose.yml:
environment:
  MYSQL_ROOT_PASSWORD: wordpress
  MYSQL_USER: wordpress
  MYSQL_PASSWORD: wordpress
  MYSQL_DATABASE: wordpress
and to ~/wp/docker-compose.yml:
environment:
  # for nginx and dockergen
  VIRTUAL_HOST: gizmotronic.ca
  # wordpress setup
  WORDPRESS_DB_HOST: mysql_db_1
  WORDPRESS_DB_USER: wordpress
  WORDPRESS_DB_PASSWORD: wordpress
  WORDPRESS_DB_NAME: wordpress
One problem with docker-compose is that, even though your application is linked to your database, the application will NOT wait for your database to be up and ready. Here is an official Docker read:
https://docs.docker.com/compose/startup-order/
I've faced a similar problem where my test application would fail because the server it depended on wasn't up and running yet.
I made a workaround similar to the article linked above by running a shell script that pings the address of the DB until it is available. This script should be the last CMD command in your application, run just before the application itself starts.
RESPONSE=$(curl --write-out "%{http_code}\n" --silent --output /dev/null "YOUR_MYSQL_DATABASE:3306")
# Until the mysql endpoint sends back a 200 HTTP response, we're going to keep checking
until [ "$RESPONSE" -eq 200 ]; do
    sleep 2
    echo "MySQL is not ready yet.. retrying... RESPONSE: ${RESPONSE}"
    RESPONSE=$(curl --write-out "%{http_code}\n" --silent --output /dev/null "YOUR_MYSQL_DATABASE:3306")
done
# Once we know the server is up, we can start our application
# <enter the command that starts your application here>
I'm not 100% sure if this is the problem you're having. Another way to debug your problem is to run docker-compose in detached mode with the -d flag and then run docker ps to see if your database container is even running. If it is running, run docker logs $YOUR_DB_CONTAINER_ID to see if MySQL is giving you any errors when starting.
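Another option, sketched here rather than taken from the thread above, is to let Docker itself do the retrying: give the WordPress service a restart policy so the container is restarted until the database finally accepts connections. The service, network and variable names are the ones from the compose files earlier in this thread.

# ~/wp/docker-compose.yml (excerpt)
version: '2'
services:
  wordpress:
    image: wordpress
    # If the first start fails because MySQL isn't ready yet,
    # Docker keeps restarting the container until it succeeds.
    restart: on-failure
    environment:
      WORDPRESS_DB_HOST: mysql_db_1
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    networks:
      - proxy_nginx
      - mysql_db
networks:
  proxy_nginx:
    external: true
  mysql_db:
    external: true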