Unable to upload data to MariaDB container from Docker volume

I am using MariaDB as my MySQL Docker container and am having trouble loading data from a Docker volume.
My database Dockerfile is similar to the one posted at this link: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_containers/install_and_deploy_a_mariadb_container
Instead of importing the data as shown in the example at that link, I would like to import it from a Docker volume.
I did try the docker-entrypoint.sh approach, where I loop through the files in /docker-entrypoint-initdb.d, but that gets me a mysql.sock error, probably because the database has already been shut down by the Dockerfile RUN command.
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
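The docker-entrypoint.sh loop mentioned in the question works roughly like the sketch below. This is a hedged simulation against a temporary directory (the file names and SQL are made up), not the image's actual entrypoint script; it only illustrates how *.sh files are sourced and *.sql files are piped to the mysql client:

```shell
# Simulated /docker-entrypoint-initdb.d processing (not the real entrypoint).
initdir=$(mktemp -d)
printf 'CREATE TABLE t (id INT);\n' > "$initdir/data.sql"
printf 'ran=yes\n' > "$initdir/setup.sh"

processed=0
for f in "$initdir"/*; do
  case "$f" in
    *.sh)  . "$f" ;;                          # shell scripts are sourced
    *.sql) echo "would pipe to mysql: $f" ;;  # .sql files go to the mysql client
  esac
  processed=$((processed + 1))
done
rm -rf "$initdir"
```

The real entrypoint runs this loop after starting a temporary server, which is why trying to import from a Dockerfile RUN step, where no server is up, fails with a socket error.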

I think your problem is this: you want to get your schema in through a mounted volume, but you don't see it in the database, right?
First of all, if your intention is to use MariaDB, you can use the official MariaDB Docker image. That way you avoid Red Hat images and custom builds.
Second, you're mounting this SQL file, but you never run a mysql import or load the dump. So one thing you can do is put a command in your docker-compose file, for example:
database:
  build:
    context: ./database
  ports:
    - "3306:3306"
  volumes:
    - ./data/data.sql:/docker-entrypoint-initdb.d/data.sql
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
  command: "mysql -u username -p database_name < file.sql"
But this is not the best way: command replaces the image's own startup command, so the server itself would never start, and the < redirection is not interpreted unless the command is wrapped in a shell.
On the other hand, you can follow the MariaDB documentation: set up a mounted volume and import the data on the first run. With the volume in place, you don't have to run the import again.
This is the mount you need:
/my/own/datadir:/var/lib/mysql
You can also mount your SQL file somewhere like /tmp/, then use docker exec to get a shell and run the import from inside the container.
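Put together, a hedged sketch of the compose service this answer describes (the host paths are placeholders, and mounting the dump under /tmp/ for a manual import is optional):

```yaml
database:
  image: mariadb:latest              # official image instead of a custom build
  ports:
    - "3306:3306"
  volumes:
    - /my/own/datadir:/var/lib/mysql # persistent datadir: import once, reuse afterwards
    - ./data/data.sql:/tmp/data.sql  # optional: mount the dump for a manual import via docker exec
  environment:
    - MYSQL_DATABASE=hell
    - MYSQL_USER=test
    - MYSQL_PASSWORD=secret
    - MYSQL_ROOT_PASSWORD=secret
```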

Related

docker-entrypoint-initdb.d vs source for importing a database into a container

I am currently aware of two ways to get an existing MySQL database into a Docker database container: docker-entrypoint-initdb.d and source /dumps/dump.sql. I am new to Docker and would like to know if there are any differences between the two approaches, or whether there are special use cases where one or the other is used. Thank you!
Update
How I use source:
In my docker-compose.yml file I have these few lines:
mysql:
  image: mysql:5.7
  container_name: laravel-2021-mysql
  volumes:
    - db_data:/var/lib/mysql
    - ./logs/mysql:/var/log/mysql
    - ./dumps/:/home/dumps # <--- this is for the dump
Then:
docker exec -it my_mysql bash
mysql -uroot -p
CREATE DATABASE newDB;
USE newDB;
source /home/dumps/dump.sql
How I use docker-entrypoint-initdb.d (this, however, does not work):
On my host I create the folder dumps and put dump.sql in it.
My docker-compose.yml file:
mysql:
  image: mysql:5.7
  container_name: laravel-2021-mysql
  volumes:
    - db_data:/var/lib/mysql
    - ./logs/mysql:/var/log/mysql
    - ./dumps/:/docker-entrypoint-initdb.d
Then: docker-compose up. But I can't find the dump in my database. I must be doing something wrong.
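A likely cause, hedged: the MySQL image only runs the scripts in /docker-entrypoint-initdb.d when its data directory is empty, and the compose file above also mounts a persistent db_data volume at /var/lib/mysql, so after the first start the dump is skipped. The sketch below simulates that guard with a temporary directory (the variable and function names are made up):

```shell
# Simulation of the image's init guard (not the real entrypoint): the scripts
# in /docker-entrypoint-initdb.d run only when the data directory is empty.
datadir=$(mktemp -d)

init_if_empty() {
  if [ -z "$(ls -A "$datadir")" ]; then
    ran_init=yes   # fresh volume: dump.sql would be imported here
  else
    ran_init=no    # existing database: init scripts are skipped
  fi
}

init_if_empty                 # first ever start
first=$ran_init
touch "$datadir/ibdata1"      # pretend mysqld created its system tablespace
init_if_empty                 # restart with the persisted db_data volume
second=$ran_init
rm -rf "$datadir"
```

If that is the cause here, removing the db_data volume (which destroys the existing database) before running docker-compose up again should make the dump load.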

Data not persisting in docker volume

I'm using Windows with Linux containers. I have a docker-compose file for an API and an MS SQL database. I'm trying to use volumes with the database so that my data will persist even if my container is deleted. My docker-compose file looks like this:
version: '3'
services:
  api:
    image: myimage/myimagename:myimagetag
    environment:
      - SQL_CONNECTION=myserverconnection
    ports:
      - 44384:80
    depends_on:
      - mydatabase
  mydatabase:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=mypassword
    volumes:
      - ./data:/data
    ports:
      - 1433:1433
volumes:
  sssvolume:
Everything spins up fine when I do docker-compose up. I enter data into the database and my API is able to access it. The issue is when I stop everything, delete my database container, and do docker-compose up again: the data is no longer there. I've tried creating an external volume first and adding
external: true
to the volumes section, but that hasn't worked. I've also tried changing the path of the volume; instead of ./data:/data I've had
sssvolume:/var/lib/docker/volumes/sssvolume/_data
but the same thing happens. My understanding was that if you name a volume and then reference it by name in a different container, it will use that volume.
I'm not sure if my config is wrong or if I'm misunderstanding the use case for volumes and they aren't able to do what I want them to do.
MSSQL stores data under /var/opt/mssql, so you should change your volume definition in your docker-compose file to
volumes:
  - ./data:/var/opt/mssql
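Alternatively, if the intent was to use the named sssvolume declared under the top-level volumes: key (it is declared in the question's file but never mounted), a hedged sketch of that variant:

```yaml
mydatabase:
  image: mcr.microsoft.com/mssql/server:2019-latest
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=mypassword
  volumes:
    - sssvolume:/var/opt/mssql   # named volume mounted at MSSQL's actual data root
  ports:
    - 1433:1433
volumes:
  sssvolume:
```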

Docker volume associated to postgres image empty and not persistent

I have a docker-compose file to build a web server with Django and a Postgres database. It basically looks like this:
version: '3'
services:
  server:
    build:
      context: .
      dockerfile: ./docker/server/Dockerfile
    image: backend
    volumes:
      - ./api:/app
    ports:
      - 8000:8000
    depends_on:
      - postgres
      - redis
    environment:
      - PYTHONUNBUFFERED=1
  postgres:
    image: kartoza/postgis:11.0-2.5
    volumes:
      - pg_data:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_DB: "gis,backend"
      POSTGRES_PORT: "5432"
      POSTGRES_USER: "user"
      POSTGRES_PASS: "pass"
      POSTGRES_MULTIPLE_EXTENSIONS: "postgis,postgis_topology"
    ports:
      - 5432:5432
  redis:
    image: "redis:alpine"
volumes:
  pg_data:
I'm using a volume to make my data persistent.
I managed to run my containers and add data to the database. A volume has successfully been created, per docker volume ls:
DRIVER    VOLUME NAME
local     server_pg_data
But this volume is empty, as the output of docker system df -v shows:
Local Volumes space usage:
VOLUME NAME      LINKS    SIZE
server_pg_data   1        0B
Also, if I want or need to rebuild the containers with docker-compose down and docker-compose up, the data has been purged from my database. Yet I thought volumes were used to make data persist on disk…
I must be missing something in the way I'm using Docker and volumes, but I don't get what:
Why does my volume appear empty while there is data in my postgres container?
Why does my volume not persist after doing docker-compose down?
This thread (How to persist data in a dockerized postgres database using volumes) looked similar, but the solution does not seem to apply.
The kartoza/postgis image isn't configured the same way as the standard postgres image. Its documentation notes (under "Cluster Initializations"):
By default, DATADIR will point to /var/lib/postgresql/{major-version}. You can instead mount the parent location like this: -v data-volume:/var/lib/postgresql
If you look at the Dockerfile in GitHub, you will also see that parent directory named as a VOLUME, which has some interesting semantics here.
With the setting you show, the actual data will be stored in /var/lib/postgresql/11.0; you're mounting the named volume on a different directory, /var/lib/postgresql/data, which is why it stays empty. Changing the volume mount to just /var/lib/postgresql should address this:
volumes:
  - pg_data:/var/lib/postgresql:rw # not .../data
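A small simulation of why the mounted directory stays at 0B: the server writes under the versioned subdirectory, not the path the volume is mounted on. The temp-dir paths below are stand-ins for /var/lib/postgresql/11.0 and /var/lib/postgresql/data:

```shell
# Simulate the container's /var/lib/postgresql with a temp directory.
base=$(mktemp -d)
mkdir -p "$base/11.0" "$base/data"         # real datadir vs. mounted (wrong) path
echo 'heap pages' > "$base/11.0/base_file" # kartoza/postgis writes under 11.0

in_mounted=$(ls -A "$base/data" | wc -l)   # what the pg_data volume would see
in_datadir=$(ls -A "$base/11.0" | wc -l)   # where the data actually lives
rm -rf "$base"
```

Mounting the volume one level up, at the parent /var/lib/postgresql, puts the real datadir inside the volume.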

Not able to create databases using MySQL configuration inside docker

I have installed Ubuntu in VirtualBox and installed Docker and docker-compose in it.
Now I want to create a MySQL database using docker-compose.yml, in which I define the MySQL configuration, including the database file to import.
When I execute the following command, it installs MySQL but does not create the database:
sudo docker-compose up -d --force-recreate --build
docker-compose.yml
mysql:
  image: "mysql:5.7"
  network_mode: "bridge"
  volumes:
    - ${DATA_ROOT}/mysql:/var/lib/mysql
    - ./docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
  environment:
    - MYSQL_ROOT_PASSWORD=${ENV_PASSWORD}
    - MYSQL_USER="${ENV_USER}"
    - MYSQL_PASSWORD="${ENV_PASSWORD}"
    - MYSQL_PORT_3306_TCP_ADDR=0.0.0.0
    - MYSQL_PORT_3306_TCP_PORT=3306
  command:
    - --user=root
    - --max_allowed_packet=500M
    - --character-set-server=utf8mb4
    - --collation-server=utf8mb4_unicode_ci
    - --max_connections=250
    - --default-authentication-plugin=mysql_native_password
  ports:
    - "3306:3306"
Also, in some cases it lets me create the database the first time, but when I remove the whole image and volume in Docker and try to recreate them, I get the same issue.
I suppose there is some pre-existing file that prevents me from recreating it (that's just my assumption).
You need to remove the data directory mounted by ${DATA_ROOT}/mysql:/var/lib/mysql if you want to initialize the database with /docker-entrypoint-initdb.d, as clearly mentioned in the official documentation:
Usage against an existing database
If you start your mysql container instance with a data directory that
already contains a database (specifically, a mysql subdirectory), the
$MYSQL_ROOT_PASSWORD variable should be omitted from the run command
line; it will in any case be ignored, and the pre-existing database
will not be changed in any way.

docker mysql persistent storage

I have this app I'm trying to orchestrate using docker + fig, which worked great for the first day. It uses a data container where I want to persist my database files, plus redis and mysql containers used by the app.
Once booted up, the mysql container looks inside /var/lib/mysql for data files and, if none are found, creates the default db, which I can then populate; the files are created and also persisted in my data volume.
While learning fig I had to do fig rm --force mysql, which deleted my mysql container. I did this without fear, knowing that my data was safe in the data container. Running ls on my host shows the mysql files still intact.
The problem occurs when I run fig up again, which recreates the mysql container. Even though I share the same volumes and my old mysql files are still present, the new container creates a new database as if the shared volume were empty. This only happens if I rm the container, not if I shut fig down and bring it back up.
Here's my fig file if it helps:
data:
  image: ubuntu:12.04
  volumes:
    - /data/mysql:/var/lib/mysql
redis:
  image: redis:latest
mysql:
  image: mysql:latest
  ports:
    - 3306
  environment:
    MYSQL_DATABASE: *****
    MYSQL_ROOT_PASSWORD: *****
  volumes_from:
    - data
web:
  build: .
  dns: 8.8.8.8
  command: python manage.py runserver 0.0.0.0:8000
  environment:
    - DEBUG=True
    - PYTHONUNBUFFERED=1
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - data
    - mysql
    - redis
Any ideas why the new mysql container won't use the existing files?
I have not used fig but will be looking into it as the simplistic syntax in your post looks pretty great. Every day I spend a couple more hours expanding my knowledge about Docker and rewire my brain as to what is possible. For about 3 weeks now I have been running a stateless data container for my MySQL instances. This has been working great with no issues.
If the content inside /var/lib/mysql does not exist when the container starts, then a script installs the needed database files. The script checks whether the initial database files exist, not just whether the /var/lib/mysql path is present.
if [[ ! -f $VOLUME_HOME/ibdata1 ]]; then
    echo "=> An empty or uninitialized MySQL volume is detected in $VOLUME_HOME"
    echo "=> Installing MySQL ..."
else
    echo "=> Using an existing volume of MySQL"
fi
Here is a direct link to a MySQL repo I am continuing to enhance
This seems to be a bug, see this related question which has links to relevant fig/docker issues: https://stackoverflow.com/a/27562669/204706
Apparently the situation is improved with docker 1.4.1, so you should try using that version if you aren't already.
