I am trying to create a MySQL database schema while the docker-compose.yml file is being executed:
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
links:
- web
onrun:
command: "docker exec -i test_mysql_1 mysql -uroot -proot test <dummy1.sql"
I tried onrun, but it is not working.
I am building the first image but pulling the second image from Docker Hub.
Kindly help with how to execute the above command after docker-compose up.
There is nothing like onrun in docker-compose; it only brings up the containers and executes their configured commands. You have a few possible options:
Use mysql Image Initialization
mysql:
  image: mysql:latest
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=test
  volumes:
    - ./dummy1.sql:/docker-entrypoint-initdb.d/dummy1.sql
  ports:
    - "3306:3306"
You put your SQL files inside /docker-entrypoint-initdb.d in the container; the MySQL image executes them automatically when the database is first created.
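For reference, a minimal dummy1.sql could be created like this (the schema is purely illustrative, not something from your project):
# Write an illustrative init script; the entrypoint runs it once, on first start
cat > dummy1.sql <<'SQL'
CREATE TABLE IF NOT EXISTS users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);
INSERT INTO users (name) VALUES ('alice');
SQL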
Use a bash script
docker-compose up -d
# Give MySQL some time to come up
sleep 20
# -T disables pseudo-TTY allocation so that stdin redirection works
docker-compose exec -T mysql mysql -uroot -proot test < dummy1.sql
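If the fixed sleep turns out to be flaky, you can poll until the server answers instead (a sketch; mysqladmin ships with the mysql image):
docker-compose up -d
# Loop until the server inside the container responds to ping
until docker-compose exec -T mysql mysqladmin ping -uroot -proot --silent; do
  sleep 2
done
docker-compose exec -T mysql mysql -uroot -proot test < dummy1.sql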
Use another docker service to initialize the DB
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
mysqlinit:
image: mysql:latest
volumes:
- ./dummy1.sql:/dump/dummy1.sql
command: bash -c "sleep 20 && mysql -h mysql -uroot -proot test < /dump/dummy1.sql"
You run another service which initializes the DB for you, like mysqlinit in the example above.
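To check that the initialization ran, you can list the tables afterwards (test is the database name from the compose file):
# Should list whatever tables dummy1.sql created
docker-compose exec -T mysql mysql -uroot -proot -e 'SHOW TABLES;' test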
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order.
From https://hub.docker.com/_/mysql/
That is the convenient way many databases (PostgreSQL, MySQL, ...) initialize themselves on container creation. You create a *.sql / *.sh file and bind it via a volume into the new container:
db:
  image: mysql:latest
  volumes:
    - ./db/entrypoint:/docker-entrypoint-initdb.d
  environment:
    - MYSQL_ROOT_PASSWORD=iamgroot
    - MYSQL_DATABASE=gotg
This mounts all your .sql / .sh files into the container, where they are then executed automatically.
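Since the files are executed in alphabetical order, a numeric prefix is a common way to control sequencing (the file names below are hypothetical):
ls db/entrypoint/
# 01-schema.sql  02-seed.sql  03-grants.sh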
Related
I have successfully containerized my basic Yii2 application with Docker and it runs on localhost:8000. However, I cannot use the app effectively, as most of its data is stored in migration files. Is there a way I could run the migrations inside Docker after starting it (or while it is running)?
This is my docker-compose file:
version: '2'
services:
  php:
    image: yiisoftware/yii2-php:7.1-apache
    volumes:
      - ~/.composer-docker/cache:/root/.composer/cache:delegated
      - ./:/app:delegated
    ports:
      - '8000:80'
    networks:
      - my-network
  db:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_DATABASE=my-db
      - MYSQL_PASSWORD=password
      - MYSQL_ROOT_PASSWORD=password
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - mydb:/var/lib/mysql
    networks:
      - my-network
  memcached:
    container_name: memcached
    image: memcached:latest
    ports:
      - "0.0.0.0:11211:11211"
volumes:
  restatdb:
networks:
  my-network:
    driver: bridge
and my Dockerfile:
FROM alpine:3.4
ADD . /
COPY ./config/web.php ./config/web.php
COPY . /var/www/html
# Let docker create a volume for the session dir.
# This keeps the session files even if the container is rebuilt.
VOLUME /var/www/html/var/sessions
It is possible to run yii commands in Docker. First, let the Yii2 container run in the background or in another tab of the terminal. The yii commands can then be run using docker exec on the interactive interface, which lets us interact with the running container:
sudo docker exec -i <container-ID> php yii migrate/up
You can get the container ID using
sudo docker ps
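With Compose you can also address the service by name instead of looking up the container ID (php is the service name in the compose file above):
sudo docker-compose exec php php yii migrate/up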
I have a Docker image in a GitLab registry. When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any services running, so I had the idea to create a YAML file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then fails, exiting with code 0 and no further message.
If I add commands to my YAML like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper and is executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I had left a volumes directive in place which overwrote my entire application with an empty directory.
You can just leave that out.
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
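After removing the mount, recreate the containers so the contents baked into the image are actually used:
# Rebuild and restart; docker ps should now show both my-mysql and my-app
docker-compose -f docker-compose-gitlab.yml up --build -d
docker ps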
You can debug the containers' networking by listing the networks with docker network ls,
then inspecting the Compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach the other container.
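Put together, the debugging steps might look like this (the network ID comes from the first command's output):
docker network ls
docker network inspect <ComposeNetworkID>   # check the "Containers" section
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up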
I'm trying to get my head around Docker. I got it working with my Rails 6 application; it builds and runs successfully. Now I want to push the application to my Docker Hub repository.
I'm not quite sure how to do this, because I have 3 containers, but in every tutorial I read people just push one.
That's the output of docker ps -a:
That's my docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - postgres:/var/lib/postgresql/data
    env_file:
      - .env
    ports:
      - "5432"
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/webmenue
      - bundler_gems:/bundle
      - ./docker/database.yml:/webmenue/config/database.yml
      - cache:/cache
    ports:
      - "3000:3000"
    env_file:
      - .env
    environment:
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=webpacker
      - SPROCKETS_CACHE=/cache
    depends_on:
      - db
  webpacker:
    build: .
    command: ./bin/webpack-dev-server
    volumes:
      - .:/webmenue
    ports:
      - '3035:3035'
    environment:
      - NODE_ENV=development
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
  postgres:
  bundler_gems:
  cache:
I read about the --link flag, but this seems to be deprecated.
So: Do I need to push all containers or is there a way to get them into one container?
The short answer is as simple as: if you have multiple images, you need to push multiple times ;)
Now the helpful answer: you have 2 images, postgres and your current working directory. postgres is already an image you download from Docker Hub, so there is no need to push it.
As for your other 2 services, they currently share one Dockerfile, so only one push is needed, as shown below.
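The build-and-push sequence would look something like this (the image name is the example used in this answer, not something from your project):
# Build once from the shared Dockerfile, tag it, and push the tagged image
docker build -t max-kirsch/my-cool-app:1.0.0 .
docker push max-kirsch/my-cool-app:1.0.0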
In your compose file both services use build: ., therefore they share the same Docker image. Let's say you pushed it with docker push max-kirsch/my-cool-app:1.0.0; you would then change the compose file to look like this:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - postgres:/var/lib/postgresql/data
    env_file:
      - .env
    ports:
      - "5432"
  web:
    image: max-kirsch/my-cool-app:1.0.0
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/webmenue
      - bundler_gems:/bundle
      - ./docker/database.yml:/webmenue/config/database.yml
      - cache:/cache
    ports:
      - "3000:3000"
    env_file:
      - .env
    environment:
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=webpacker
      - SPROCKETS_CACHE=/cache
    depends_on:
      - db
  webpacker:
    image: max-kirsch/my-cool-app:1.0.0
    command: ./bin/webpack-dev-server
    volumes:
      - .:/webmenue
    ports:
      - '3035:3035'
    environment:
      - NODE_ENV=development
      - RAILS_ENV=development
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
  postgres:
  bundler_gems:
  cache:
I think you are confused because you have 2 apps in your compose file that share the same Dockerfile, and it might be that this confusion is justified. In my personal opinion, you should give each of them its own Dockerfile in which only the components it needs are installed. It is hard to be sure without seeing the Dockerfile, but as an example, it does not look like your webpacker service needs Ruby installed; it could start from a plain Node image as its base instead of installing Node in your Ruby image.
I am trying to run the following docker-compose file:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
command: bash /opt/sql/create-db.sql
# command: ps -aux
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
I am encountering an error with the line:
command: bash /opt/sql/create-db.sql
It fails because the pgsql service has not started yet; this can be monitored with command: ps -aux.
How can I run my script once the pgsql service is started?
You can use a volume to provide an initialization sql script:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
This works because the original PostgreSQL image contains an entrypoint script (run after Postgres has been started) which executes any *.sql files found in the /docker-entrypoint-initdb.d/ folder. By mounting your local file into that place, your SQL files will be run at the right time. Note that these scripts only run on the very first start, while the data directory is still empty.
It's actually mentioned in documentation for that image: https://hub.docker.com/_/postgres under the How to extend this image section.
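To confirm the script ran on first start, you can list the tables from the host (pgsql is the container_name from the compose file; psql ships with the postgres image):
# \dt lists the tables that init.sql should have created
docker exec -it pgsql psql -U postgres -c '\dt'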
I'm porting my Rails app from my local machine into a Docker container and running into an issue with Elasticsearch/Searchkick. I can get it working temporarily, but I'm wondering if there is a better way. Basically, the port for Elasticsearch isn't matching up with the default localhost:9200 that Searchkick uses. I have used docker inspect on the Elasticsearch container to get its actual IP and then set the ENV['ELASTICSEARCH_URL'] variable as the Searchkick docs say, and it works. The problem is that this is a pain: if I restart or change the containers, the IP sometimes changes and I have to go through the whole process again. Here is my docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/living-recipe
    ports:
      - '3000:3000'
    env_file:
      - .env
    depends_on:
      - postgres
      - elasticsearch
  postgres:
    image: postgres
  elasticsearch:
    image: elasticsearch
Use elasticsearch:9200 instead of localhost:9200. Docker Compose exposes each container via its service name.
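You can verify that the name resolves from the web container (a quick check; this assumes curl is available in your web image):
docker-compose exec web curl -s http://elasticsearch:9200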
Here is the docker-compose.yml that is working for me. Docker Compose exposes the container via its name, so you can set the ELASTICSEARCH_URL: http://elasticsearch:9200 environment variable in your Rails application container:
version: "3"
services:
db:
image: postgres:9.6
restart: always
volumes:
- /tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
volumes:
- .:/app
ports:
- 9200:9200
environment:
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
api:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- ".:/app"
ports:
- "3001:3000"
depends_on:
- db
environment:
DB_HOST: db
DB_PASSWORD: password
ELASTICSEARCH_URL: http://elasticsearch:9200
You don't want to try to map the IP address for elasticsearch manually, as it will change.
Swap out depends_on for links. This will create the same dependency, but also allows the containers to be reached via service name.
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
Links also express dependency between services in the same way as depends_on, so they determine the order of service startup.
Docker Compose File Reference - Links
Then in your rails app where you're setting ENV['ELASTICSEARCH_URL'], use elasticsearch instead.