Run SQL script placed in a Docker volume - docker

In my docker-compose.yml I placed init.sql into a volume.
version: '3'
services:
  mysqldb:
    image: mysql:5.7.22
    container_name: mysql
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/1-init.sql
I know that I should run this script via a Dockerfile. How can I achieve this?

The official Docker mysql image will run everything present in /docker-entrypoint-initdb.d when the database is first initialized (see "Initializing a fresh instance" on that page). Since you're injecting it into the container using a volume, if the database doesn't already exist, your script will be run automatically as you have it.
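A quick way to confirm the script ran, as a sketch (this assumes the compose file above, plus the MYSQL_ROOT_PASSWORD or equivalent variable the mysql image requires, and a fresh data directory):
docker-compose up -d
# on a fresh instance the entrypoint logs each script it executes from
# /docker-entrypoint-initdb.d; look for a line mentioning 1-init.sql
docker logs mysql 2>&1 | grep initdb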
That page also suggests creating a custom Docker image. The Dockerfile would be very short:
FROM mysql:5.7.22
COPY init.sql /docker-entrypoint-initdb.d/1-init.sql
and then, once you've built the modified image, you wouldn't need a local copy of the script to have it run at first start.
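A sketch of that build step (the tag my-mysql-init is only a placeholder):
# build from the directory containing the Dockerfile and init.sql
docker build -t my-mysql-init .

# then point the compose service at it (image: my-mysql-init) and drop the
# ./init.sql volume line, since the script is now baked into the image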

If you want to run an init script every time you start the container, you could write it as below:
services:
  mysqldb:
    image: mysql:5.7.22
    container_name: mysql
    restart: always
    command: --init-file /docker-entrypoint-initdb.d/1-init.sql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=homestead
      - MYSQL_USER=root
      - MYSQL_PASSWORD=secret
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/1-init.sql  # mount the script so --init-file can find it
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
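With the --init-file variant the script runs on every server start, not just the first one. A quick check, assuming the credentials from the environment section above:
docker-compose up -d
# connect with the root password from the compose file and list databases
docker exec mysql mysql -uroot -psecret -e "SHOW DATABASES;"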

Related

Docker-compose starts a single container

I'm using docker-compose and I'm trying to run an Express app and a Postgres db in Docker containers.
My problem is that it only starts the postgres container; the Express app is not running.
What am I doing wrong?
I've published it on my github: https://github.com/ayakymyshyn/docker-playground
Looking at your docker-compose file and Dockerfile, I assume that your intention is for the web service in the compose file to run the image produced by the Dockerfile.
If that is the case, you need to modify the compose file and tell it to build an image based on the Dockerfile.
It should look something like:
version: "3.7"
services:
web:
image: node
build: . # <--- this is the missing line
depends_on:
- db
ports:
- '3001:3001'
db:
image: postgres
environment:
POSTGRES_PASSWORD: 123123123
POSTGRES_USER: yakym
POSTGRES_DB: jwt
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- '5433:5433'
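With the build: line in place, a quick check might look like this (assuming the Dockerfile sits next to the compose file):
docker-compose up --build   # rebuild the web image and start both services
docker-compose ps           # web and db should both show as Up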

Docker image working on pull but not via the image directive in a yml file?

I have a Docker image on a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear are working, and when I enter the container everything looks fine.
But I don't have any services running alongside it. So I had the idea to create a yml file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails, exiting with code 0 and no further message.
If I add commands in my yml like php artisan config:clear the error gets even less clear for me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I had left over a volumes directive which overwrites my entire application with an empty directory.
You can just leave it out.
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the network of the containers by listing the networks with docker network ls,
then, once the list is shown, inspecting the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
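A sketch of that debugging flow; the network name depends on the compose project name, so <project>_default below is a placeholder:
docker network ls                                 # compose creates a <project>_default network
docker inspect <project>_default                  # lists the containers attached to it
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up -d
# from inside the app, the database host is the service name "mysql" (port 3306), not localhost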

Why aren't my docker images, built by "docker-compose build", using the correct version of my code?

Docker doesn't use the latest code after running git checkout <non_master_branch>, even though I can see it in VS Code.
I am using the following docker-compose file:
version: '2'
volumes:
  pgdata:
  backend_app:
services:
  nginx:
    container_name: nginx-angular-dev
    image: nginx-angular-dev
    build:
      context: ./frontend
      dockerfile: /.docker/nginx.dockerfile
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - web
  web:
    container_name: django-app-dev
    image: django-app-dev
    build:
      context: ./backend
      dockerfile: /django.dockerfile
    command: ["./wait-for-postgres.sh", "db", "./django-entrypoint.sh"]
    volumes:
      - backend_app:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    env_file: .env
    environment:
      FRONTEND_BASE_URL: http://192.168.99.100/
      BACKEND_BASE_URL: http://192.168.99.100/api/
      MODE_ENV: DOCKER_DEV
  db:
    container_name: django-db
    image: postgres:10
    env_file: .env
    volumes:
      - pgdata:/var/lib/postgresql/data
I have tried docker-compose build --no-cache, followed by docker-compose up --force-recreate but it didn't solve the problem.
What is the root of my problem?
Your volumes: are causing problems. Docker volumes aren't intended to hold code, and you should delete the volume declarations that mention backend_app:.
Your docker-compose.yml file says in part:
volumes:
  backend_app:
services:
  web:
    volumes:
      - backend_app:/code
backend_app is a named volume: it keeps data that must be persisted across container runs. If the volume doesn't exist yet the first time then data will be copied into it from the image, but after that, Docker considers it to contain critical user data that must not be updated.
If you keep code or libraries in a Docker volume, Docker will never update it, even if the underlying image changes. This is a common problem in JavaScript applications that mount an anonymous volume on their node_modules directory.
As a temporary workaround, if you docker-compose down -v, it will delete all of the volumes, including the one with your code in it, and the next time you start it will get recreated from the image.
The best solution is to simply not use a volume here at all. Delete the lines above from your docker-compose.yml file. Develop and test your application in a non-Docker environment, and when you're ready to do integration testing, run docker-compose up --build. Your code will live in the image, and an ordinary docker build will produce a new image with new code.
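As a concrete sequence, under the assumption that the compose file above is the one in use:
# remove containers and named volumes, including backend_app with the stale code
docker-compose down -v

# after deleting the backend_app volume lines from docker-compose.yml,
# rebuild so the images contain the currently checked-out code
docker-compose up --build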

Docker compose up does not restart on reboot

I have successfully created docker containers and they work when loaded using:
sudo docker-compose up -d
The yml is as follows:
services:
  nginx:
    build: ./nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./static:/static
    links:
      - node:node
  node:
    build: ./node
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./node:/usr/src/app
      - /usr/src/app/node_modules
Am I supposed to create a service for this? Reading the documentation, I thought that the containers would reload if restart was set to always.
FYI: the yml is inside a projects directory in the home of the base user: ubuntu.
I tried checking for solutions on Stack Overflow but could not find anything appropriate. Thanks.
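One possible cause, offered as an assumption rather than a confirmed diagnosis: restart: always only takes effect while the Docker daemon is running, so the daemon itself has to start on boot. On a systemd-based Ubuntu host that would look like:
# assumption: systemd-based host; make the Docker daemon start on boot
sudo systemctl enable docker

# containers started with restart: always are then brought back by the
# daemon after a reboot, without rerunning docker-compose up
sudo docker-compose up -d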

Docker Compose - How to store database data?

I am new to Docker and developing a project using docker-compose. From the documentation I have learned that I should be using data-only containers to keep data persistent, but I am unable to do so using docker-compose.
Whenever I do docker-compose down it removes the data from the db, but with docker-compose stop the data is not removed. Maybe this is because I am not creating a named data volume, and docker-compose down removes all the containers. So I tried naming the container, but it threw me errors.
Please have a look at my yml file:
version: '2'
services:
  data_container:
    build: ./data
    #volumes:
    #  - dataVolume:/data
  db:
    build: ./db
    ports:
      - "5445:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
      # - PGDATA=/var/lib/postgresql/data/pgdata
    volumes_from:
      # - container:db_bus
      - data_container
  geoserver:
    build: ./geoserver
    depends_on:
      - db
    ports:
      - "8004:8080"
    volumes:
      - ./geoserver/data:/opt/geoserverdata_dir
  web:
    build: ./web
    volumes:
      - ./web:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    command: python manage.py runserver 0.0.0.0:8000
  nginx:
    build: ./nginx
    ports:
      - "83:80"
    depends_on:
      - web
The Dockerfile for the data_container is:
FROM stackbrew/busybox:latest
MAINTAINER Tom Offermann <tom#offermann.us>
# Create data directory
RUN mkdir /data
# Create /data volume
VOLUME /data
I tried this, but with docker-compose down the data is lost. I tried naming the data_container, as you can see in the commented lines, and it threw me this error:
ERROR: Named volume "dataVolume:/data:rw" is used in service "data_container" but no declaration was found in the volumes section.
So right now what I am doing is: I created a standalone, data-only named container and put it in the volumes_from value of the db. It worked fine and didn't remove any data even after doing docker-compose down.
My questions:
What is the best approach to create containers that can store a database's data using docker-compose, and how do I use them properly?
I'm not entirely comfortable with the approach I have opted for, the one with a standalone data container. Any thoughts?
docker-compose down does the following:
Stops containers and removes containers, networks, volumes, and images created by up
So the behaviour you are experiencing is expected.
Use docker-compose stop to shut down containers created with the docker-compose file without removing their volumes.
Secondly, you don't need the data-container pattern in version 2 of docker-compose. So remove that and just use:
db:
  ...
  volumes:
    - /var/lib/postgresql/data
docker-compose down stops containers but also removes them (along with their networks, etc.).
Use docker-compose stop instead.
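A short sketch of the difference between the two commands:
docker-compose stop      # stop containers; containers, networks and volumes are kept
docker-compose start     # bring the same containers back, data intact

docker-compose down      # stop and remove containers and networks; named volumes survive
docker-compose down -v   # also remove the named volumes declared in the file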
I think the best approach to make containers that can store a database's data with docker-compose is to use named volumes:
version: '2'
services:
  db:  # https://hub.docker.com/_/mysql/
    image: mysql
    volumes:
      - "wp-db:/var/lib/mysql:rw"
    env_file:
      - "./conf/db/mysql.env"
volumes:
  wp-db: {}
Here, it will create a named volume called "wp-db" (if it doesn't exist) and mount it at /var/lib/mysql (in read-write mode, the default). This is where the database stores its data (for the mysql image).
If the named volume already exists, it will be used without being recreated.
When starting, the mysql image looks for databases in /var/lib/mysql (your volume) in order to use them.
You can find more information in the docker-compose file reference here:
https://docs.docker.com/compose/compose-file/#/volumes-volume-driver
To store database data, make sure your docker-compose.yml looks like the following.
If you want to use a Dockerfile:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
If you want to use your image instead of a Dockerfile, your docker-compose.yml will look like this:
version: '3.1'
services:
  php:
    image: php:7.4-apache
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
If you want to store or preserve the data of MySQL, you must remember to add these two entries to your docker-compose.yml: under the db service,
volumes:
  - mysql-data:/var/lib/mysql
and at the top level,
volumes:
  mysql-data:
After that, use this command:
docker-compose up -d
Now your data will persist and will not be deleted even after using this command:
docker-compose down
Extra: if you want to delete all the data, use
docker-compose down -v
To verify or check the database data, list the volumes by using this command:
docker volume ls

DRIVER    VOLUME NAME
local     35c819179d883cf8a4355ae2ce391844fcaa534cb71dc9a3fd5c6a4ed862b0d4
local     133db2cc48919575fc35457d104cb126b1e7eb3792b8e69249c1cfd20826aac4
local     483d7b8fe09d9e96b483295c6e7e4a9d58443b2321e0862818159ba8cf0e1d39
local     725aa19ad0e864688788576c5f46e1f62dfc8cdf154f243d68fa186da04bc5ec
local     de265ce8fc271fc0ae49850650f9d3bf0492b6f58162698c26fce35694e6231c
local     phphelloworld_mysql-data
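To see where that named volume lives on the host, it can also be inspected; phphelloworld_ is the compose project prefix from the listing above:
docker volume inspect phphelloworld_mysql-data   # the "Mountpoint" field shows the host path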

Resources