I have a docker-compose.yml file with the TiDB container set up like this:
ti-db:
image: pingcap/tidb
container_name: ti-db
ports:
- 4000:4000
logging:
driver: none
volumes:
- ./storage/tidb:/var/lib/mysql
I am trying to have it create a database called "messageservice" on startup but cannot get it to work.
In the same docker-compose file I have a MySQL container where I create the initial databases using an init.sql file mapped into docker-entrypoint-initdb.d like this:
- ./dbInit/init.sql:/docker-entrypoint-initdb.d/init.sql
But when I do the same thing for TiDB it does not work.
Is there a way I can set up the docker-compose file so the database gets created when I run docker-compose up?
What do you have in your init.sql file?
services:
ti-db:
image: pingcap/tidb
container_name: ti-db
ports:
- 4000:4000
logging:
driver: none
volumes:
- ./storage/tidb:/var/lib/mysql
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
Or you can try it without the file by using "command":
services:
ti-db:
image: pingcap/tidb
container_name: ti-db
ports:
- 4000:4000
logging:
driver: none
volumes:
- ./storage/tidb:/var/lib/mysql
command: --store=tikv --path="127.0.0.1:2379" --log-file="/var/log/tidb.log" --log-level=debug && mysql -h127.0.0.1 -P4000 -uroot -e "create database messageservice;"
tidb-docker-compose doesn't support ./dbInit/...; you need to execute the initialization SQL file manually.
By the way, I recommend asking questions at https://ask.pingcap.com/, which is the official forum.
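For example, once the ti-db container is up and listening on port 4000, the SQL could be applied from the host with the MySQL client. This is a sketch, not an official procedure: it assumes the mysql client is installed locally, reuses the dbInit/init.sql path and 4000:4000 port mapping from the question, and relies on TiDB's default passwordless root user:
mysql -h 127.0.0.1 -P 4000 -u root < dbInit/init.sql
Alternatively, a hypothetical one-shot helper service in the same compose file could wait for TiDB and then load the file. The service name tidb-init and the use of the mysql:5.7 image for its client are assumptions for illustration only:
tidb-init:
  image: mysql:5.7
  depends_on:
    - ti-db
  volumes:
    - ./dbInit/init.sql:/init.sql
  # override the entrypoint so only the mysql client runs (no server is started);
  # poll until TiDB answers, then load the SQL file and exit
  entrypoint: >
    sh -c "until mysql -h ti-db -P 4000 -u root -e 'SELECT 1' >/dev/null 2>&1; do sleep 2; done;
    mysql -h ti-db -P 4000 -u root < /init.sql"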
I am using docker-compose and here is my docker-compose.yaml file:
version: "3.7"
services:
node:
container_name: my-app
image: my-app
build:
context: ./my-app-directoty
dockerfile: Dockerfile
command: npm run dev
environment:
MONGO_URL: my-database
port: 3000
volumes:
- ./my-app-directory/src:/app/src
- ./my-app-directory/node_modules:/app/node_modules
ports:
- "3000:3000"
networks:
- my-app-network
depends_on:
- my-database
my-database:
container_name: my-database
image: mongo
ports:
- "27017:27017"
networks:
- my-app-network
networks:
my-app-network:
driver: bridge
I expect to find a clean, newly created database each time I run the following commands:
docker-compose build
docker-compose up
But this is not the case. When I bring the containers up with docker-compose up, my database is in the exact state it was in the last time I shut it down with docker-compose down. Since I have not specified a volumes property for my-database, is this normal behaviour? Does this mean that no further action is required to persist the database state? And can I use this in production if I ever choose to use docker-compose?
The mongo image defines the following volumes:
/data/configdb
/data/db
So Docker will create and use an unnamed (anonymous) volume for /data/db.
If you want to have a new one, use:
docker-compose down -v
docker-compose up -d --build
Or use a bind mount on the volume location, like:
volumes:
- ./db:/data/db:rw
And drop your local db directories when you want to start over.
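If you want to double-check which anonymous volume the container is actually using before removing anything, you can inspect its mounts (a quick check, assuming the container name my-database from the compose file above):
docker inspect -f '{{ json .Mounts }}' my-database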
I am trying to learn Kong using docker-compose. I am able to run Kong + Konga and create services, but whenever I do docker-compose down and then up again I lose all my data:
kong:
container_name: kong
image: kong:2.1.4-alpine
restart: unless-stopped
networks:
kong-net:
ipv4_address: 172.1.1.40
volumes:
- kong_data:/usr/local/kong/declarative
environment:
KONG_DATABASE: postgres
KONG_PG_HOST: kong-database
KONG_PG_USER: kong
KONG_PG_PASSWORD: password
KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
KONG_DB_UPDATE_FREQUENCY: 1m
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_ERROR_LOG: /dev/stderr
depends_on:
- kong-migration
ports:
- "8001:8001"
- "8444:8444"
- "8000:8000"
- "8443:8443"
It looks like the volume mapping is not working. Please help.
If you want to keep the data after your Kong docker-compose stack goes down, it is better to run Kong in database mode.
You then create a persistent volume for your database, and it will keep your changes.
According to the Kong manual there are two supported databases: PostgreSQL and Cassandra.
PostgreSQL is my choice for small projects, as I'm not planning for the huge horizontal scale that Cassandra targets.
As you will find in the manual, starting your project with Docker and a database is very simple.
But remember to add a volume to your database service, since the sample mentioned in the manual has no volume.
For PostgreSQL you can add -v /custom/mount:/var/lib/postgresql/data to the docker run command,
or use a compose file like this:
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    restart: unless-stopped
    image: postgres:latest
    environment:
      - POSTGRES_USER=your_db_user
      - POSTGRES_DB=kong
      - POSTGRES_PASSWORD=your_db_password
    volumes:
      - postgres-data:/var/lib/postgresql/data
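As a rough sketch of how this ties back to the Kong service from the question (assuming the database service is named postgres as above; adjust the names to your own setup), Kong would then point its connection settings at that service by name:
kong:
  environment:
    KONG_DATABASE: postgres
    KONG_PG_HOST: postgres        # the database service name above
    KONG_PG_USER: your_db_user
    KONG_PG_PASSWORD: your_db_password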
Answer: you should use a Docker volume for persistent data.
As the reference says:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers
The first step is to create a volume that your host and Docker container will use to share data:
docker volume create new-volume
The second step is to use that volume in your docker-compose file (in your case).
A single Docker Compose service with a volume looks like this:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
On the first invocation of docker-compose up the volume will be created. The same volume will be reused on following invocations.
A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
external: true
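Note that with external: true the volume is not created by Compose; it has to exist before you run docker-compose up, for example:
docker volume create myapp
Otherwise Compose will refuse to start and report that the external volume is missing.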
I have a Docker image in a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running and reachable. Things like php artisan config:clear are working. When I enter the container everything looks fine.
But I don't have any services running, so I had the idea to create a YAML file, docker-compose-gitlab.yml, for docker-compose to set things up:
version: '3'
services:
mysql:
image: mysql:5.7
container_name: my-mysql
environment:
- MYSQL_ROOT_PASSWORD=***
- MYSQL_DATABASE=dbname
- MYSQL_USER=username
- MYSQL_PASSWORD=***
volumes:
- ./data/mysql:/var/lib/mysql
ports:
- "3307:3306"
application:
image: gitlab.somedomain.com:5050/root/app:latest
build:
context: .
dockerfile: ./Dockerfile
container_name: my-app
ports:
- "8081:8080"
volumes:
- .:/application
env_file: .env.docker
working_dir: /application
depends_on:
- mysql
links:
- mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then fails, exiting with code 0 and no further message.
If I add commands to my YAML like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper and is executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies (docker run and docker-compose), the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I left a volume directive in place which overwrote my entire application with an empty directory.
You can just leave that out.
version: '3'
services:
mysql:
image: mysql:5.7
container_name: my-mysql
environment:
- MYSQL_ROOT_PASSWORD=***
- MYSQL_DATABASE=dbname
- MYSQL_USER=username
- MYSQL_PASSWORD=***
volumes:
- ./data/mysql:/var/lib/mysql
ports:
- "3307:3306"
application:
image: gitlab.somedomain.com:5050/root/app:latest
build:
context: .
dockerfile: ./Dockerfile
container_name: my-app
ports:
- "8081:8080"
## volumes:
## - .:/application ## this would overwrite the app
env_file: .env.docker
working_dir: /application
depends_on:
- mysql
links:
- mysql
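After removing the bind mount, recreate the container so the change takes effect, for example (using the compose file name from the question; a plain up -d is usually enough, since Compose recreates containers whose configuration has changed):
docker-compose -f docker-compose-gitlab.yml up -d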
You can debug the containers' networks by listing the networks with docker network ls,
then, when the list is shown, inspecting the compose network with docker inspect <ComposeNetworkID>.
If you find that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost so they can reach each other.
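For example (a sketch; Compose normally names the default network after the project/directory, so substitute your own name from the docker network ls output):
# list the networks, including the one Compose created
docker network ls
# see which containers are attached to it
docker network inspect <project>_default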
I have successfully created docker containers and they work when loaded using:
sudo docker-compose up -d
The yml is as follows:
services:
nginx:
build: ./nginx
restart: always
ports:
- "80:80"
volumes:
- ./static:/static
links:
- node:node
node:
build: ./node
restart: always
ports:
- "8080:8080"
volumes:
- ./node:/usr/src/app
- /usr/src/app/node_modules
Am I supposed to create a service for this? Reading the documentation, I thought that the containers would reload if restart was set to always.
FYI: the YAML file is inside a projects directory in the home of the base user, ubuntu.
I tried checking for solutions on Stack Overflow but could not find anything appropriate. Thanks.
I am new to Docker and am developing a project using Docker Compose. From the documentation I have learned that I should be using data-only containers to keep data persistent, but I am unable to do so using docker-compose.
Whenever I do docker-compose down it removes the data from the db, but with docker-compose stop the data is not removed. Maybe this is because I am not creating a named data volume, and docker-compose down simply removes all the containers. So I tried using a named volume, but it threw errors.
Please have a look at my yml file:
version: '2'
services:
data_container:
build: ./data
#volumes:
# - dataVolume:/data
db:
build: ./db
ports:
- "5445:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
# - PGDATA=/var/lib/postgresql/data/pgdata
volumes_from:
# - container:db_bus
- data_container
geoserver:
build: ./geoserver
depends_on:
- db
ports:
- "8004:8080"
volumes:
- ./geoserver/data:/opt/geoserverdata_dir
web:
build: ./web
volumes:
- ./web:/code
ports:
- "8000:8000"
depends_on:
- db
command: python manage.py runserver 0.0.0.0:8000
nginx:
build: ./nginx
ports:
- "83:80"
depends_on:
- web
The Dockerfile for the data_container is:
FROM stackbrew/busybox:latest
MAINTAINER Tom Offermann <tom#offermann.us>
# Create data directory
RUN mkdir /data
# Create /data volume
VOLUME /data
I tried this, but with docker-compose down the data is lost. When I tried the named volume on the data_container (see the commented lines), it threw this error:
ERROR: Named volume "dataVolume:/data:rw" is used in service "data_container" but no declaration was found in the volumes section.
So right now what I am doing is this: I created a standalone, data-only named container and put it in the volumes_from value of the db. It worked fine and didn't remove any data, even after doing docker-compose down.
My queries:
What is the best approach to creating containers that store a database's data using docker-compose, and to using them properly?
My conscience is not at ease with the approach I have opted for, creating a standalone data container. Any thoughts?
docker-compose down
does the following
Stops containers and removes containers, networks, volumes, and images
created by up
So the behaviour you are experiencing is expected.
Use docker-compose stop to shut down the containers created with the docker-compose file without removing their volumes.
Secondly, you don't need the data-container pattern in version 2 of Docker Compose. So remove that and just use:
db:
...
volumes:
- /var/lib/postgresql/data
docker-compose down stops containers but also removes them (with everything: networks, ...).
Use docker-compose stop instead.
I think the best approach to making containers that can store a database's data with docker-compose is to use named volumes:
version: '2'
services:
db: #https://hub.docker.com/_/mysql/
image: mysql
volumes:
- "wp-db:/var/lib/mysql:rw"
env_file:
- "./conf/db/mysql.env"
volumes:
wp-db: {}
Here, it will create a named volume called "wp-db" (if it doesn't exist) and mount it in /var/lib/mysql (in read-write mode, the default). This is where the database stores its data (for the mysql image).
If the named volume already exists, it will be used without creating it.
When starting, the mysql image looks for existing databases in /var/lib/mysql (your volume) and uses them.
You can find more information in the docker-compose file reference here:
https://docs.docker.com/compose/compose-file/#/volumes-volume-driver
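To see what Compose actually created (it prefixes the volume name with the project/directory name), you can list and inspect the volume; <project> here is a placeholder for your own project name:
docker volume ls
docker volume inspect <project>_wp-db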
To store database data, make sure your docker-compose.yml looks like one of the following.
If you want to build from a Dockerfile:
version: '3.1'
services:
php:
build:
context: .
dockerfile: Dockerfile
ports:
- 80:80
volumes:
- ./src:/var/www/html/
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: example
volumes:
- mysql-data:/var/lib/mysql
adminer:
image: adminer
restart: always
ports:
- 8080:8080
volumes:
mysql-data:
If you want to use an image instead of a Dockerfile, your docker-compose.yml will look like this:
version: '3.1'
services:
php:
image: php:7.4-apache
ports:
- 80:80
volumes:
- ./src:/var/www/html/
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: example
volumes:
- mysql-data:/var/lib/mysql
adminer:
image: adminer
restart: always
ports:
- 8080:8080
volumes:
  mysql-data:
If you want to store or preserve the MySQL data, you must remember to add these two pieces to your docker-compose.yml:
volumes:
- mysql-data:/var/lib/mysql
and
volumes:
mysql-data:
After that, use this command:
docker-compose up -d
Now your data will be persistent and will not be deleted even after using this command:
docker-compose down
Extra: if you want to delete all the data, use
docker-compose down -v
To verify or check the database data, list the volumes with this command:
docker volume ls
DRIVER VOLUME NAME
local 35c819179d883cf8a4355ae2ce391844fcaa534cb71dc9a3fd5c6a4ed862b0d4
local 133db2cc48919575fc35457d104cb126b1e7eb3792b8e69249c1cfd20826aac4
local 483d7b8fe09d9e96b483295c6e7e4a9d58443b2321e0862818159ba8cf0e1d39
local 725aa19ad0e864688788576c5f46e1f62dfc8cdf154f243d68fa186da04bc5ec
local de265ce8fc271fc0ae49850650f9d3bf0492b6f58162698c26fce35694e6231c
local phphelloworld_mysql-data
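The last entry, phphelloworld_mysql-data, is the named volume declared in the compose file above (Compose prefixes it with the project name). You can see where it is stored on the host with:
docker volume inspect phphelloworld_mysql-data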