I would like to know if it's possible to execute a PSQL command inside the docker-compose file.
I have the following docker-compose.yml:
version: '3'
services:
  postgres:
    image: postgres:9.6
    container_name: postgres-container
    ports:
      - "5432:5432"
    network_mode: host
    environment:
      - LC_ALL=C.UTF-8
      - POSTGRES_DB=databasename
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=
      - POSTGRES_PORT=5432
After this is running OK, I run the following command:
docker exec -i postgres-container psql -U username -d databasename < data.sql
These two steps work fine, but I would like to know if it's possible to make them a single step.
Every time I run this, it's important that the database is brand new; that's why I don't persist it in a volume and run the import command each time.
Is it possible to run docker-compose up and also run the psql command?
Thanks in advance!
A pure docker-compose solution is to mount the file with a volume:
volumes:
  - ./data.sql:/docker-entrypoint-initdb.d/init.sql
According to the image's Dockerfile, at startup it executes every SQL file placed in /docker-entrypoint-initdb.d.
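Putting it together, the whole thing can run as one docker-compose up (a sketch, assuming data.sql sits next to docker-compose.yml; since nothing is persisted to a volume, the data directory is empty on every fresh start, which is exactly when the init scripts run):

```yaml
version: '3'
services:
  postgres:
    image: postgres:9.6
    container_name: postgres-container
    ports:
      - "5432:5432"
    environment:
      - LC_ALL=C.UTF-8
      - POSTGRES_DB=databasename
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=
    volumes:
      # executed automatically on first startup, before clients can connect
      - ./data.sql:/docker-entrypoint-initdb.d/init.sql
```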
Related
I am currently aware of two ways to get an existing MySQL database into a MySQL Docker container: docker-entrypoint-initdb.d and source /dumps/dump.sql. I am new to Docker and would like to know if there are any differences between the two approaches, or whether there are special use cases where one or the other is preferred. Thank you!
Update
How I use source:
In my docker-compose.yml file I have these few lines:
mysql:
  image: mysql:5.7
  container_name: laravel-2021-mysql
  volumes:
    - db_data:/var/lib/mysql
    - ./logs/mysql:/var/log/mysql
    - ./dumps/:/home/dumps # <--- this is for the dump
docker exec -it my_mysql bash, then:
mysql -uroot -p
CREATE DATABASE newDB;
USE newDB;
SOURCE /home/dumps/dump.sql;
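Those interactive steps can also be collapsed into non-interactive commands run from the host (a sketch; my_mysql, newDB, and the dump path are the names from above, and it assumes the root password is available inside the container as MYSQL_ROOT_PASSWORD, as it is for the official image):

```shell
# create the database, then stream the dump into it from the host
docker exec -i my_mysql sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "CREATE DATABASE IF NOT EXISTS newDB"'
docker exec -i my_mysql sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" newDB' < dumps/dump.sql
```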
How I use docker-entrypoint-initdb.d (this one does not work for me):
On my host I created the folder dumps and put dump.sql in it.
My docker-compose.yml file:
mysql:
  image: mysql:5.7
  container_name: laravel-2021-mysql
  volumes:
    - db_data:/var/lib/mysql
    - ./logs/mysql:/var/log/mysql
    - ./dumps/:/docker-entrypoint-initdb.d
Then: docker-compose up. But I can't find the dump in my database. I must be doing something wrong.
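One common pitfall worth checking here, since the db_data volume above persists between runs: the MySQL image only executes files in /docker-entrypoint-initdb.d when the data directory is empty, i.e. on the very first startup. If the volume was initialized before the dump was mounted, the scripts are silently skipped. Recreating the volume forces the import:

```shell
# WARNING: -v deletes the named db_data volume and all data in it
docker-compose down -v
docker-compose up
```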
I can't get environment variables written directly in a docker-compose file to work. A similar configuration on the command line works just fine:
docker run --name container_name -d --network=my-net --mount type=bind,src=/Users/t2wu/Documents/Work/Dodo/Intron-Exon_expression/DockerCompose/intronexon_db/mnt_mysql,dst=/var/lib/mysql -e MYSQL_DATABASE=db_name -e MYSQL_USER=username -e MYSQL_PASSWORD=passwd mysql/mysql-server:8.0.13
This is a MySQL instance that sets three environment variables: MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD. I'm later able to launch bash in it (docker exec -it container_name bash), start the client (mysql -u username -p), and it connects just fine.
However when I write it in a docker-compose.yml:
version: "3.7"
services:
  intronexon_db:
    image: mysql/mysql-server:8.0.13
    volumes:
      - type: bind
        source: ./intronexon_db/mnt_mysql
        target: /var/lib/mysql
    environment:
      MYSQL_DATABASE: db_name
      MYSQL_USER: username
      MYSQL_PASSWORD: passwd
    networks:
      - my-net
networks:
  my-net:
    driver: bridge
Then when I use the mysql client, it's as if the user doesn't exist. How do I set it so that it is equivalent to the -e flag during docker run?
EDIT
docker-compose --version shows docker-compose version 1.24.1, build 4667896b
EDIT 2
The environment flags did work, but I ran into problems because:
- It takes MySQL some time to get the database, username, and password set up, and I was checking way too early.
- I need to specify localhost for some reason: mysql --host=localhost -u user -p. Specifying 127.0.0.1 does not work.
- For some unknown reason, the example stack.yml from the official Docker image did not have to specify --host when the adminer container is running. If I remove adminer, the --host flag needs to be given.
- Sometimes the MySQL daemon stops. It might have to do with my mount target /var/lib/mysql, but I'm not certain.
- command: --default-authentication-plugin=mysql_native_password is actually significant. I don't know why I didn't need it when I used docker run.
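For the "checking too early" issue, a healthcheck can make readiness explicit instead of polling by hand (a sketch under the names used above; mysqladmin ping is the usual probe for the official MySQL images):

```yaml
services:
  intronexon_db:
    image: mysql/mysql-server:8.0.13
    environment:
      MYSQL_DATABASE: db_name
      MYSQL_USER: username
      MYSQL_PASSWORD: passwd
    healthcheck:
      # succeeds only once the server actually accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
```

Other services can then use depends_on with condition: service_healthy (Compose v2 file format) instead of connecting blindly.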
docker-compose accepts ENVs in both forms, either an array or a dictionary; it's worth double-checking the syntax or trying both approaches.
environment
Add environment variables. You can use either an array or a dictionary. Any boolean values (true, false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the YAML parser.
Environment variables with only a key are resolved to their values on the machine Compose is running on, which can be helpful for secret or host-specific values.
environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:
or
environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET
It might be something to do with the docker-compose version, as this works fine with 3.1. As the official image suggests, it's better to start from the official docker-compose.yml:
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
Also, it's better to debug such cases, where everything seems correct but some minor syntax is off, in isolation. You can test the environment handling before involving the DB at all:
version: "3.7"
services:
  intronexon_db:
    image: alpine
    environment:
      MYSQL_DATABASE: myDb
    command: tail -f /dev/null
Run docker-compose up, then test and debug in this throwaway environment:
docker exec -it composeenv_intronexon_db_1 ash -c "printenv"
The environment params in your yml may need a - in front of them; that could be the likely culprit (note that the array form uses = rather than :):
version: "3.7"
services:
  intronexon_db:
    image: mysql/mysql-server:8.0.13
    volumes:
      - ./intronexon_db/mnt_mysql:/var/lib/mysql
    environment:
      - MYSQL_DATABASE=db_name
      - MYSQL_USER=username
      - MYSQL_PASSWORD=passwd
    networks:
      - my-net
networks:
  my-net:
    driver: bridge
Following case:
I want to build two containers with docker-compose. One is MySQL; the other is a .war file run with Spring Boot that depends on MySQL and needs a working DB. After the mysql container is built, I want to fill the DB with my mysqldump file before the other container starts.
My first idea was to have it in my mysql Dockerfile as
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD" < /appsb.sql
but of course it tries to execute this while building.
I have no idea how to do it in the docker-compose file as a command; maybe that would work. Or do I need to write a script?
docker-compose.yml
version: "3"
services:
  mysqldb:
    networks:
      - appsb-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=rootpw
      - MYSQL_DATABASE=appsb
    build: ./mysql
  app-sb:
    image: openjdk:8-jdk-alpine
    build: ./app-sb/
    ports:
      - "8080:8080"
    networks:
      - appsb-mysql
    depends_on:
      - mysqldb
networks:
  appsb-mysql:
Dockerfile for mysqldb:
FROM mysql:5.7
COPY target/appsb.sql /
#RUN mysql -u root -p"$MYSQL_ROOT_PASSWORD" < /appsb.sql
Dockerfile for the other springboot appsb:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/appsb.war /
CMD ["java", "-jar", "/appsb.war"]
Here is a similar issue (loading a dump.sql at startup) for a MySQL container: Setting up MySQL and importing a dump within the Dockerfile.
Option 1: import via a command in the Dockerfile.
Option 2: execute a bash script from docker-compose.yml.
Option 3: execute an import command from docker-compose.yml.
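A minimal sketch of Option 1, using the filenames from the question: the official mysql image automatically runs any .sql file it finds in /docker-entrypoint-initdb.d when the (empty) data directory is first initialized, which happens before app-sb can connect, so no RUN line is needed at all:

```dockerfile
FROM mysql:5.7
# executed automatically on first startup, after MYSQL_DATABASE is created
COPY target/appsb.sql /docker-entrypoint-initdb.d/appsb.sql
```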
I'm trying to dockerize an existing Rails app that uses PostgreSQL 9.5 as its database. After a successful docker-compose build I can run docker-compose up and see the connection, but when I navigate to localhost I get the following error:
PG::ConnectionBad
could not connect to server: No such file or directory. Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is what is in my docker-compose.yml
version: '2'
services:
  db:
    image: postgres:9.5
    restart: always
    volumes:
      - ./tmp/db:/var/lib/postgresql/9.5/main
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: password
      POSTGRES_DB: hardware_development
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
From what I've seen, I need to add some configuration somewhere in my Dockerfile or docker-compose.yml, but I either see no change or end up back at the same error.
I've been able to use Docker's own docs and Docker Compose to create a new Rails app with Postgres where I see the "Yay! You're on Rails!" page, but with my own code I can't see anything. Running the app outside of Docker shows me the test page as well, so it's not the code in my Rails app or the Postgres environment outside of Docker.
Your db docker-compose entry isn't exposing any ports. It needs to expose 5432; add a ports line for it just like you have for web.
Edit: also, I don't know why you added restart: always to your database container, but I wouldn't recommend that for Rails or pretty much anything.
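Separately, the Unix-socket path in the error suggests Rails is trying to reach Postgres on the local machine rather than over the network; inside Compose the database host is the service name. A sketch of the relevant part of config/database.yml, with values taken from the compose file above:

```yaml
default: &default
  adapter: postgresql
  host: db            # the compose service name, not localhost or a socket
  username: username
  password: password
  database: hardware_development
```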
I'm using the latest orientdb docker image in my docker-compose. I need to set the default root password but it's not working. My docker-compose.yml:
orientdb:
  image: orientdb
  ports:
    - "2434:2434"
    - "2480:2480"
    - "2424:2424"
  volumes:
    - "/mnt/sda1/dockerVolumes/orientdb:/opt/orientdb/databases"
  environment:
    - ORIENTDB_ROOT_PASSWORD
I'm currently running:
$ export ORIENTDB_ROOT_PASSWORD=anypw
$ docker-compose up -d
You need to define the password in docker-compose:
environment:
  - ORIENTDB_ROOT_PASSWORD=anypw
If you want to keep the actual password out of docker-compose.yml, you can reference a variable from the host environment instead:
environment:
  - ORIENTDB_ROOT_PASSWORD=${ORIENTDB_ROOT_PASSWORD}
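Compose also reads a .env file placed next to docker-compose.yml automatically, so the value does not have to be exported in the shell first (a sketch; the variable name matches the compose snippet above):

```
# .env — read automatically by docker-compose from the project directory
ORIENTDB_ROOT_PASSWORD=anypw
```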
I have been able to reproduce your solution and it works:
docker-compose.yml
version: '2'
services:
  orientdb:
    image: orientdb
    ports:
      - "2434:2434"
      - "2480:2480"
      - "2424:2424"
    environment:
      - ORIENTDB_ROOT_PASSWORD=test
now:
$ docker-compose up -d
Creating network ... with the default driver
Creating test_orientdb_1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d1f0a4a81222 orientdb "server.sh" 31 seconds ago Up 22 seconds 0.0.0.0:2424->2424/tcp, 0.0.0.0:2434->2434/tcp, 0.0.0.0:2480->2480/tcp test_orientdb_1
User: root
Pass: test
You probably tried to log in, but you have not created a database yet. Just create one and try to log in again.
You have to run the docker-compose down command first, then run docker-compose up again. This removes the previous configuration and allows you to connect to the database.