I have installed Ubuntu in VirtualBox and also installed apt, Docker and docker-compose inside it.
Now I want to create a MySQL database using docker-compose.yml, in which I defined the MySQL configuration, including an imported database file.
When I execute the following command it installs MySQL but does not create the database.
sudo docker-compose up -d --force-recreate --build
docker-compose.yml
mysql:
  image: "mysql:5.7"
  network_mode: "bridge"
  volumes:
    - ${DATA_ROOT}/mysql:/var/lib/mysql
    - ./docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
  environment:
    - MYSQL_ROOT_PASSWORD=${ENV_PASSWORD}
    - MYSQL_USER="${ENV_USER}"
    - MYSQL_PASSWORD="${ENV_PASSWORD}"
    - MYSQL_PORT_3306_TCP_ADDR=0.0.0.0
    - MYSQL_PORT_3306_TCP_PORT=3306
  command:
    - --user=root
    - --max_allowed_packet=500M
    - --character-set-server=utf8mb4
    - --collation-server=utf8mb4_unicode_ci
    - --max_connections=250
    - --default-authentication-plugin=mysql_native_password
  ports:
    - "3306:3306"
Also, in some cases it allows me to create the database the first time, but when I remove the whole image and volume in Docker and try to recreate it again, I get the same issue.
I suppose there is some pre-existing file which prevents me from recreating it (that's just my assumption).
You need to remove the data directory mounted by - ${DATA_ROOT}/mysql:/var/lib/mysql if you want to initialize the database with /docker-entrypoint-initdb.d, as clearly mentioned in the official documentation:
Usage against an existing database
If you start your mysql container instance with a data directory that
already contains a database (specifically, a mysql subdirectory), the
$MYSQL_ROOT_PASSWORD variable should be omitted from the run command
line; it will in any case be ignored, and the pre-existing database
will not be changed in any way.
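In practice that means clearing the host directory that is bind-mounted to /var/lib/mysql before recreating the container. A minimal sketch, assuming ${DATA_ROOT} is the same variable used in your compose file:
# stop and remove the container, then clear the bind-mounted data directory
sudo docker-compose down
sudo rm -rf "${DATA_ROOT}/mysql"
# recreate the container; the scripts in ./docker-entrypoint-initdb.d now run
# because /var/lib/mysql starts out empty
sudo docker-compose up -d --force-recreate --build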
Related
I am currently aware of two ways to get an existing MySQL database into a Docker database container: docker-entrypoint-initdb.d and source /dumps/dump.sql. I am new to Docker and would like to know if there are any differences between the two approaches, or whether there are special use cases where one or the other is used. Thank you!
Update
How I use source:
In my docker-compose.yml file I have these few lines:
mysql:
  image: mysql:5.7
  container_name: laravel-2021-mysql
  volumes:
    - db_data:/var/lib/mysql
    - ./logs/mysql:/var/log/mysql
    - ./dumps/:/home/dumps # <--- this is for the dump
docker exec -it my_mysql bash then
mysql -uroot -p then
create DATABASE newDB; then
use newDB; then
source /home/dumps/dump.sql
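For reference, the same import can also be done non-interactively in one step. This is only a sketch; it assumes the container is named my_mysql, the root password is secret (a placeholder), and the database newDB has already been created:
# pipe the dump from the host straight into mysql inside the container
docker exec -i my_mysql mysql -uroot -psecret newDB < ./dumps/dump.sql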
How I use docker-entrypoint-initdb.d (this does not work):
On my host I create the folder dumps and put this dump.sql in it.
My docker-compose.yml file:
mysql:
  image: mysql:5.7
  container_name: laravel-2021-mysql
  volumes:
    - db_data:/var/lib/mysql
    - ./logs/mysql:/var/log/mysql
    - ./dumps/:/docker-entrypoint-initdb.d
Then: docker-compose up. But I can't find the dump in my database. I must be doing something wrong.
I'm using Windows with Linux containers. I have a docker-compose file for an API and an MS SQL database. I'm trying to use volumes with the database so that my data will persist even if my container is deleted. My docker-compose file looks like this:
version: '3'
services:
  api:
    image: myimage/myimagename:myimagetag
    environment:
      - SQL_CONNECTION=myserverconnection
    ports:
      - 44384:80
    depends_on:
      - mydatabase
  mydatabase:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=mypassword
    volumes:
      - ./data:/data
    ports:
      - 1433:1433
volumes:
  sssvolume:
Everything spins up fine when I do docker-compose up. I enter data into the database and my API is able to access it. The issue I'm having is when I stop everything, delete my database container, and then do docker-compose up again: the data is no longer there. I've tried creating an external volume first and adding
external: true
to the volumes section, but that hasn't worked. I've also messed around with the path of the volume; for example, instead of ./data:/data I've had
sssvolume:/var/lib/docker/volumes/sssvolume/_data
but still the same thing happens. It was my understanding that if you name a volume and then reference it by name in a different container, it will use that volume.
I'm not sure if my config is wrong or if I'm misunderstanding the use case for volumes and they aren't able to do what I want them to do.
MSSQL stores data under /var/opt/mssql, so you should change your volume definition in your docker-compose file to
volumes:
  - ./data:/var/opt/mssql
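If you'd rather use the named volume you already declared (sssvolume), the service could look roughly like this; a sketch, not your exact config:
mydatabase:
  image: mcr.microsoft.com/mssql/server:2019-latest
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=mypassword
  volumes:
    - sssvolume:/var/opt/mssql   # named volume mapped to the SQL Server data directory
  ports:
    - 1433:1433
volumes:
  sssvolume: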
I am using MariaDB as my MySQL Docker container and am having trouble loading the data from the docker volume.
My database Dockerfile is similar to the one posted at this link: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_containers/install_and_deploy_a_mariadb_container
Instead of importing the data as shown in the example in the above link, I would like to import it from a docker volume.
I did try using the docker-entrypoint.sh example where I loop through the files in docker-entrypoint-initdb.d, but that gets me a mysql.sock error, probably because the database has already been shut down by the Dockerfile RUN command.
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
I think your problem is this: you want to get your schema through a mounted volume, but you don't see it in the database, right?
First of all, if your intention is to use MariaDB, you can use the MariaDB Official Docker Image. That way you avoid Red Hat images or custom builds.
Second, you're copying in this SQL, but you never run a mysql import or dump or anything like that. So what you can do is put a command in your docker-compose, for example:
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
  command: "mysql -u username -p database_name < file.sql"
But this is not the best way.
On the other hand, you can follow the MariaDB documentation: set up a mounted volume and import the data on the first run. With that volume in place you don't have to run the imports again.
This is the mount that you have to use:
/my/own/datadir:/var/lib/mysql
Obviously, you can also mount your SQL file into a /tmp/ folder, and that's it; with docker exec you can run a shell and import it from inside the container.
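A rough sketch of that second approach; the image tag, credentials, file names and container name below are placeholders, not taken from your setup:
# docker-compose.yml excerpt: persist the data directory and mount the dump somewhere temporary
database:
  image: mariadb:10.6
  environment:
    - MYSQL_ROOT_PASSWORD=secret
    - MYSQL_DATABASE=hell
  volumes:
    - /my/own/datadir:/var/lib/mysql
    - ./data/data.sql:/tmp/data.sql
# one-off import from inside the running container (first run only)
docker exec -it <container_name> sh -c 'mysql -uroot -psecret hell < /tmp/data.sql'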
I have a few docker-compose files to test different environments, for example testing vs development vs production.
My main issue is using the postgres image, creating different databases for each environment. Here is an example of two different environments' docker-compose.yml files:
docker-compose.first.yml
version: '3'
services:
  db:
    image: "postgres"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=first
      - POSTGRES_DB=first
    ports:
      - 5432:5432
docker-compose.second.yml
version: '3'
services:
  db:
    image: "postgres"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=second
      - POSTGRES_DB=second
    ports:
      - 5432:5432
If I do docker-compose -f docker-compose.first.yml up, I want it to come up with the volumes previously created by the first docker-compose file.
If I do docker-compose -f docker-compose.second.yml up, I want it to use the volumes of the second docker-compose file.
Right now, the behavior is that each of these files uses the same volumes, so unless I do docker-compose -f docker-compose.first.yml down -v before using the second one there won't be any change, and then I'll lose the volumes of the first one! How can I keep these separate?
Note: These files are in the same directory; does that make a difference?
The answer here, after doing research about same-directory docker-compose files, is that the project name determines whether a container is freshly created or recreated, which in turn determines the volumes it uses.
By default, docker-compose uses the directory name as the "compose project name", so a container name could be mytestproject_db_1, following the pattern project-name_service-name_number. Since the compose files are in the same directory, they produce the same names, because none of those factors change.
To fix this, you manually set the "compose project name" with the -p option, so I could do docker-compose -p test-myapp up or docker-compose -p prod-myapp up to make sure the compose projects won't be linked.
More info: https://github.com/docker/compose/issues/2120
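Applied to the two files above, that might look like this (the project names are just examples):
# each -p value gets its own containers, networks and volumes
docker-compose -p first-env -f docker-compose.first.yml up -d
docker-compose -p second-env -f docker-compose.second.yml up -d
# tear one environment down (including its volumes) without touching the other
docker-compose -p first-env -f docker-compose.first.yml down -v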
I have an app I'm trying to orchestrate using docker + fig, which worked great for the first day of trying. It uses a data container where I want to persist my database files, plus redis and mysql containers used by the app.
Once booted up, the mysql container looks inside /var/lib/mysql for data files and, if none are found, it creates the default db, which I can then populate; the files are created and also persisted in my data volume.
While learning fig I had to do a fig rm --force mysql, which deleted my mysql container. I did this without fear, knowing that my data was safe in the data container. Running ls on my host shows the mysql files still intact.
The problem occurs when I run fig up again, which creates the mysql container again. Even though the same volumes are shared and my old mysql files are still present, this new container creates a new database as if the shared volume were empty. This only occurs if I rm the container, not if I shut fig down and bring it back up.
Here's my fig file if it helps:
data:
  image: ubuntu:12.04
  volumes:
    - /data/mysql:/var/lib/mysql
redis:
  image: redis:latest
mysql:
  image: mysql:latest
  ports:
    - 3306
  environment:
    MYSQL_DATABASE: *****
    MYSQL_ROOT_PASSWORD: *****
  volumes_from:
    - data
web:
  build: .
  dns: 8.8.8.8
  command: python manage.py runserver 0.0.0.0:8000
  environment:
    - DEBUG=True
    - PYTHONUNBUFFERED=1
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - data
    - mysql
    - redis
Any ideas why the new mysql container won't use the existing files?
I have not used fig but will be looking into it, as the simple syntax in your post looks pretty great. Every day I spend a couple more hours expanding my knowledge about Docker and rewiring my brain as to what is possible. For about 3 weeks now I have been running a stateless data container for my MySQL instances. This has been working great with no issues.
If the content inside /var/lib/mysql does not exist when the container starts, a script installs the needed database files. The script checks whether the initial database files exist, not just whether the /var/lib/mysql path exists.
if [[ ! -f $VOLUME_HOME/ibdata1 ]]; then
    echo "=> An empty or uninitialized MySQL volume is detected in $VOLUME_HOME"
    echo "=> Installing MySQL ..."
else
    echo "=> Using an existing volume of MySQL"
fi
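A minimal sketch of how such a startup check might wrap the actual initialization; it assumes VOLUME_HOME points at /var/lib/mysql and that mysql_install_db is available in the image (this is not taken from the linked repo):
#!/bin/bash
# hypothetical entrypoint fragment: initialize only when the data files are missing
VOLUME_HOME="/var/lib/mysql"
if [[ ! -f "$VOLUME_HOME/ibdata1" ]]; then
    echo "=> An empty or uninitialized MySQL volume is detected in $VOLUME_HOME"
    echo "=> Installing MySQL ..."
    mysql_install_db --datadir="$VOLUME_HOME" > /dev/null 2>&1
else
    echo "=> Using an existing volume of MySQL"
fi
# hand off to the MySQL server process
exec mysqld_safe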
Here is a direct link to a MySQL repo I am continuing to enhance
This seems to be a bug; see this related question, which has links to the relevant fig/docker issues: https://stackoverflow.com/a/27562669/204706
Apparently the situation is improved with docker 1.4.1, so you should try using that version if you aren't already.