I'm trying to automate a backup of a Postgres database that runs in Docker. Here is my script:
#!/bin/bash
POSTGRES_USERNAME="XXX"
CONTAINER_NAME="YYY"
DB_NAME="ZZZ"
BACKUP="/tmp/backup/${DB_NAME}.sql.gz"
docker exec -it -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} |
gzip -c > ${BACKUP}
exit
If I run this manually, I get the backup, but when the script is scheduled as a cron job, the backup folder ends up empty.
Can anyone please help me?
Hello everyone, thank you so much for your immediate responses. I think I have fixed my issue.
The cron job failed because of the -i (--interactive) flag:
docker exec -it -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} | gzip -c > ${BACKUP}
After removing -i (--interactive) from the shell script, it works perfectly:
docker exec -t -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} | gzip -c > ${BACKUP}
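For anyone hardening this further: cron provides neither stdin nor a TTY, so -i is unusable and -t is unnecessary here, and a pseudo-TTY can even inject carriage returns into the piped dump. A cron-safe variant of the script might look like this (a sketch; the date-stamped filename and mkdir are my additions):
#!/bin/bash
set -euo pipefail

POSTGRES_USERNAME="XXX"
CONTAINER_NAME="YYY"
DB_NAME="ZZZ"
BACKUP_DIR="/tmp/backup"
BACKUP="${BACKUP_DIR}/${DB_NAME}_$(date +%Y%m%d).sql.gz"

mkdir -p "${BACKUP_DIR}"

# No -i/-t: cron has no TTY, and a pseudo-TTY can corrupt piped output.
docker exec -u "${POSTGRES_USERNAME}" "${CONTAINER_NAME}" pg_dump -d "${DB_NAME}" \
  | gzip -c > "${BACKUP}"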
Related
I'm trying to translate this command to PowerShell Core:
docker run --rm -e PGPASSWORD=$Env:PGPASSWORD postgres:14.2 pg_restore --clean -d db -h mydb.com -U sa --format=custom < pg.dump
And I get this error:
The '<' operator is reserved for future use.
I've tried many things, like:
echo pg.dump | docker run --rm -e PGPASSWORD=$Env:PGPASSWORD postgres:14.2 pg_restore --clean -d db -h mydb.com -U sa --format=custom
pg_restore: error: could not read from input file: end of file
echo pg.dump | docker run --rm -e -it PGPASSWORD=$Env:PGPASSWORD postgres:14.2 pg_restore --clean -d db -h mydb.com -U sa --format=custom
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
I can't figure out how to use the stdin redirection operator with PowerShell Core.
Try changing the sequence of your actions: run the container -> copy the backup file into the container -> run pg_restore.
Here is an algorithm (not working code) to execute in PowerShell. Note that docker cp may only work with the container id; you can get it by parsing the output of docker ps | Select-String -Pattern "my_postgres"
Set-Location "C:\postgres_backup"
Write-Output "Starting postgres container"
# -d and sleep infinity keep the container alive; only the client tools are
# needed here, since the restore targets the remote host mydb.com
docker run -d --rm --name=my_postgres --env PGPASSWORD=$Env:PGPASSWORD postgres:14.2 sleep infinity
# Grab the container id, since docker cp may only work with the id (see above)
$my_postgres = docker ps -q --filter "name=my_postgres"
Write-Output "Copying backup to container"
docker cp ./pg.dump "${my_postgres}:/pg.dump"
Write-Output "Execute something in postgres container"
docker exec -i $my_postgres pg_restore --clean -d db -h mydb.com -U sa --format=custom -1 /pg.dump
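Alternatively, you can sidestep stdin and docker cp entirely by bind-mounting the dump and handing pg_restore a file argument instead (a sketch, assuming the dump sits at C:\postgres_backup\pg.dump):
docker run --rm -e PGPASSWORD=$Env:PGPASSWORD -v C:\postgres_backup\pg.dump:/pg.dump postgres:14.2 pg_restore --clean -d db -h mydb.com -U sa --format=custom /pg.dump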
I've been fiddling a lot with a cron job, and so far I cannot make it work properly.
Here is the command:
docker exec mosquitto mosquitto_pub -h localhost -p 1883 -u LOGIN -P PASSWORD -t rtorrent_ntorrents -m "{\"ntorrents\": $(docker exec -it box rtxmlrpc download_list "" | wc -l)}"
Here is what I tried:
putting it in a cron job to be executed every minute:
* * * * * /usr/bin/zsh -c 'docker exec mosquitto mosquitto_pub -h localhost -p 1883 -u LOGIN -P PASSWORD -t rtorrent_ntorrents -m "{\"ntorrents\": $(docker exec -it box rtxmlrpc download_list "" | wc -l)}"'
-> fails
putting the command in a function in .zshenv or .zshrc, then creating a cron job launching the function -> fails
I also tried setting up a simple script:
#!/usr/bin/zsh
while :
do
docker exec mosquitto mosquitto_pub -h localhost -p 1883 -u LOGIN -P PASSWORD -t rtorrent_ntorrents -m "{\"ntorrents\": $(docker exec -it box rtxmlrpc download_list "" | wc -l)}"
sleep 60
done
Which fails this way:
[1] 8665
[1] + 8665 suspended (tty output) ./ntorrents
The only way I found to use my command in a background process is screen...
Of course, running the command itself in a shell produces the desired result.
Thanks in advance for your help.
As mentioned in the comments, removing the -it parameter solves the issue.
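For reference, the corrected crontab entry is the original one with -it removed from the inner docker exec (the -t on mosquitto_pub is its topic flag and stays):
* * * * * /usr/bin/zsh -c 'docker exec mosquitto mosquitto_pub -h localhost -p 1883 -u LOGIN -P PASSWORD -t rtorrent_ntorrents -m "{\"ntorrents\": $(docker exec box rtxmlrpc download_list "" | wc -l)}"'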
I'm using Docker for Windows:
Docker version 18.03.1-ce-win64
Docker Engine 18.03.1-ce
ClickHouse client version 1.1.54380
ClickHouse server version 1.1.54380
To export data from a table in CSV format, I use the command below.
Now run the clickhouse-client container for the export:
$ docker run -it --rm --link clickhouse-server:clickhouse-client yandex/clickhouse-client -m --query="select * from default.table1 FORMAT CSV" > C:/Users/sony/Desktop/table1_data.csv --host clickhouse-server
NOTE: The above command works perfectly.
Now run the clickhouse-client container for the import:
$ docker run -it --rm --link clickhouse-server:clickhouse-client yandex/clickhouse-client -m -c "cat C:/Users/sony/Desktop/table1_data.csv | clickhouse-client --host clickhouse-server --query='INSERT INTO default.table1 FORMAT CSV' "
Could you please tell me what I am doing wrong when importing?
Thanks in advance.
I think you should mount the CSV file into the container first. To mount the file, add the -v C:/Users/sony/Desktop/table1_data.csv:~/table1_data.csv option to your docker command. So your docker run command should look like this:
$ docker run -it --rm --link clickhouse-server:clickhouse-client yandex/clickhouse-client -m -v C:/Users/sony/Desktop/table1_data.csv:~/table1_data.csv -c "cat ~/table1_data.csv | clickhouse-client --host clickhouse-server --query='INSERT INTO default.table1 FORMAT CSV'"
Edit
My bad. Mounting the file into the container won't work. Try this instead:
cat path_to_file/table1_data.csv | docker run -i --rm --link clickhouse-server:clickhouse-client yandex/clickhouse-client -m --host clickhouse-server --query="INSERT INTO default.table1 FORMAT CSV"
I already tried this on Linux, and it works. Since cat doesn't work on Windows, I found that type has the same functionality, though honestly I haven't tried it:
type C:/Users/sony/Desktop/table1_data.csv | docker run -i --rm --link clickhouse-server:clickhouse-client yandex/clickhouse-client -m --host clickhouse-server --query="INSERT INTO default.table1 FORMAT CSV"
Hope it works.
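To check that the rows actually arrived, a quick count with the same client container should do (a sketch; adjust the table name to yours):
docker run -it --rm --link clickhouse-server:clickhouse-client yandex/clickhouse-client --host clickhouse-server --query="SELECT count() FROM default.table1"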
I am trying to run MySQL through Docker and a script. Almost everything works: the container runs, it creates the users and the database, and I can connect using Workbench, but it does not run dump.sql.
I get the following error:
Access denied for user 'userxx'@'localhost' (using password: YES)
But if I run the script a second time, it connects and runs dump.sql.
Here is my script:
#!/bin/sh
echo "Starting DB..."
docker run --name test_db -d \
-e MYSQL_ROOT_PASSWORD=test2018 \
-e MYSQL_DATABASE=test -e MYSQL_USER=test_user -e MYSQL_PASSWORD=test2018 \
-p 3306:3306 \
mysql:latest
# Wait for the database service to start up.
echo "Waiting for DB to start up..."
docker exec test_db mysqladmin --silent --wait=30 -utest_user -ptest2018 ping || exit 1
# Run the setup script.
echo "Setting up initial data..."
docker exec -i test_db mysql -utest_user -ptest2018 test < dump.sql
What am I doing wrong? Or is there a way to run the dump through the Dockerfile, since I couldn't manage to do that?
If you run docker run --help, you will see these flags:
--health-cmd string Command to run to check health
--health-interval duration Time between running the check (ms|s|m|h) (default 0s)
--health-retries int Consecutive failures needed to report unhealthy
--health-start-period duration Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s)
--health-timeout duration Maximum time to allow one check to run (ms|s|m|h) (default 0s)
So you can use these flags to have Docker check health with a command you provide. In your case, that command is:
mysqladmin --silent -utest_user -ptest2018 ping
Now run it as below:
docker run --name test_db -d \
-e MYSQL_ROOT_PASSWORD=test2018 \
-e MYSQL_DATABASE=test -e MYSQL_USER=test_user -e MYSQL_PASSWORD=test2018 \
-p 3306:3306 \
--health-cmd="mysqladmin --silent -utest_user -ptest2018 ping" \
--health-interval="10s" \
--health-retries=6 \
mysql:latest
If you run docker ps, you will see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d1d160ed7de mysql:latest "docker-entrypoint.s…" 16 seconds ago Up 16 seconds (health: starting) 0.0.0.0:3306->3306/tcp test_db
You will see health: starting in the STATUS column.
Finally, you can use this status to wait: when MySQL is ready, the health status will be healthy.
So modify your script as below:
#!/bin/bash
docker run --name test_db -d \
-e MYSQL_ROOT_PASSWORD=test2018 \
-e MYSQL_DATABASE=test -e MYSQL_USER=test_user -e MYSQL_PASSWORD=test2018 \
-p 3306:3306 \
--health-cmd="mysqladmin --silent -utest_user -ptest2018 ping" \
--health-interval="10s" \
--health-retries=6 \
mysql:latest
# Wait for the database service to start up.
echo "Waiting for DB to start up..."
until [ "$(docker inspect test_db --format '{{.State.Health.Status}}')" == "healthy" ]
do
sleep 10
done
# Run the setup script.
echo "Setting up initial data..."
docker exec -i test_db mysql -utest_user -ptest2018 test < dump.sql
Here, the following command returns the health status:
docker inspect test_db --format '{{.State.Health.Status}}'
Wait until it returns healthy.
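As written, the until loop waits forever if the container never becomes healthy; a bounded variant of the same check (a sketch) gives up after a couple of minutes instead:
# Poll up to 12 times (~2 minutes), then bail out rather than hanging forever.
for i in $(seq 1 12); do
    [ "$(docker inspect test_db --format '{{.State.Health.Status}}')" = "healthy" ] && break
    sleep 10
done
if [ "$(docker inspect test_db --format '{{.State.Health.Status}}')" != "healthy" ]; then
    echo "DB did not become healthy in time" >&2
    exit 1
fi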
Note: I have used #!/bin/bash in the script.
Instead of executing the dump yourself, it is better to mount your SQL file into the init directory of the MySQL image, /docker-entrypoint-initdb.d, in the run command.
Simply add the switch:
-v $PWD/dump.sql:/docker-entrypoint-initdb.d/dump.sql
Pro tip :) don't use the latest tag. Always use a stable, specific tag like 5, 5.7, 8, etc.
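Put together with the run command from the answer above, that looks like this (a sketch; the official mysql image executes any .sql file in /docker-entrypoint-initdb.d once, when the data directory is first initialized):
docker run --name test_db -d \
    -e MYSQL_ROOT_PASSWORD=test2018 \
    -e MYSQL_DATABASE=test -e MYSQL_USER=test_user -e MYSQL_PASSWORD=test2018 \
    -p 3306:3306 \
    -v "$PWD/dump.sql":/docker-entrypoint-initdb.d/dump.sql \
    mysql:8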
I am running oracle-xe-11g on RancherOS. I want to take a backup of my DB's data. When I tried it with the command
docker exec -it $Container_Name /bin/bash
and then entered:
exp userid=username/password file=test.dmp
it worked fine and created the test.dmp file.
But I want to run the command via docker exec itself. When I tried this command:
docker exec $Container_Name sh -C exp userid=username/password file=test.dmp
I get this error message: sh: 0: Can't open exp.
The problem is:
When bash is run with the -c switch, it is not an interactive or login shell, so it won't read the usual startup scripts. Anything set in /etc/profile, ~/.bash_profile, ~/.bash_login, or ~/.profile is skipped.
Workaround:
run your container with the following command:
sudo docker run -d --name Oracle-DB -p 49160:22 -p 1521:1521 -e ORACLE_ALLOW_REMOTE=true -e ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe -e PATH=/u01/app/oracle/product/11.2.0/xe/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -e ORACLE_SID=XE -e SHLVL=1 wnameless/oracle-xe-11g
What I'm doing is specifying, via docker, the environment variables that are normally set inside the container.
Now for generating the backup file:
sudo docker exec -it e0e6a0d3e6a9 /bin/bash -c "exp userid=system/oracle file=/test.dmp"
Please note that the file will be created inside the container, so you need to copy it to the Docker host via the docker cp command.
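For example (a sketch reusing the container id from above; the dump was written to / inside the container):
docker cp e0e6a0d3e6a9:/test.dmp ./test.dmp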
This is how I did it: mount a volume into the container, e.g. /share/backups/, then execute:
docker exec -it oracle /bin/bash -c "ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe ORACLE_SID=XE /u01/app/oracle/product/11.2.0/xe/bin/exp userid=<username>/<password> owner=<owner> file=/share/backups/$(date +"%Y%m%d")_backup.dmp"
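For completeness, the volume can be attached by adding a -v flag to the run command from the earlier answer (a sketch; /share/backups on the host is an assumption):
sudo docker run -d --name Oracle-DB -p 49160:22 -p 1521:1521 \
    -v /share/backups:/share/backups \
    -e ORACLE_ALLOW_REMOTE=true \
    wnameless/oracle-xe-11g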