I'm trying to translate this command to PowerShell Core:
docker run --rm -e PGPASSWORD=$Env:PGPASSWORD postgres:14.2 pg_restore --clean -d db -h mydb.com -U sa --format=custom < pg.dump
And I get this error:
The '<' operator is reserved for future use.
I've tried many things, like:
echo pg.dump | docker run --rm -e PGPASSWORD=$Env:PGPASSWORD postgres:14.2 pg_restore --clean -d db -h mydb.com -U sa --format=custom
pg_restore: error: could not read from input file: end of file
echo pg.dump | docker run --rm -e -it PGPASSWORD=$Env:PGPASSWORD postgres:14.2 pg_restore --clean -d db -h mydb.com -U sa --format=custom
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
I can't figure out how to redirect stdin with PowerShell Core.
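One workaround, sketched here rather than guaranteed: echo pg.dump pipes the literal string "pg.dump" instead of the file's bytes (hence the end-of-file error), and PowerShell's object-based pipeline can corrupt binary data anyway. Letting cmd.exe perform the < redirection avoids both problems; note the added -i so docker attaches stdin:
cmd /c "docker run --rm -i -e PGPASSWORD=$Env:PGPASSWORD postgres:14.2 pg_restore --clean -d db -h mydb.com -U sa --format=custom < pg.dump"
PowerShell expands $Env:PGPASSWORD inside the double-quoted string before cmd.exe handles the redirect.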
Try changing the sequence of your actions: run the container -> copy the backup file into the container -> run pg_restore.
Here is an outline (not working code as-is) to run in PowerShell. Note that docker cp accepts either the container name or the container ID; if you need the ID, you can parse it out of docker ps | Select-String -Pattern "my_postgres"
Set-Location "C:\postgres_backup"
Write-Output "Starting postgres container"
# -d detaches; overriding the command with 'sleep infinity' keeps the container
# alive without starting a local server (the restore targets the remote host anyway)
docker run --rm -d --name=my_postgres --env PGPASSWORD=$Env:PGPASSWORD postgres:14.2 sleep infinity
Write-Output "Copying backup to container"
docker cp ./pg.dump my_postgres:/pg.dump
Write-Output "Execute something in postgres container"
docker exec -i $my_postgres pg_restore --clean -d db -h mydb.com -U sa --format=custom -1 pg.dump
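Alternatively, a sketch that skips the copy step entirely: bind-mount the dump into a throwaway container and pass the file name to pg_restore, since the custom format accepts a filename instead of stdin (this assumes the dump sits in the current directory and that Docker Desktop accepts the Windows-style host path):
docker run --rm -v "${PWD}/pg.dump:/pg.dump" -e PGPASSWORD=$Env:PGPASSWORD postgres:14.2 pg_restore --clean -d db -h mydb.com -U sa --format=custom /pg.dump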
Related
I've written a script to backup my docker mysql containers:
export $(grep -v '^#' .env | xargs -d '\n')
filename=$(date +'%Y%m%d_%H%M%S')
docker-compose exec mysql bash -c "mysqldump --user=$MYSQL_USERNAME --password='$MYSQL_PASSWORD' --ignore-table=$MYSQL_DATABASE.forums_readData_forums_c --ignore-table=$MYSQL_DATABASE.forums_readData_newPosts $MYSQL_DATABASE | gzip > /tmp/$filename.gz"
mysql_container=$(docker ps | grep -E 'mysql' | awk '{ print $1 }')
docker cp $mysql_container:/tmp/$filename.gz $BACKUP_DIR/mysql/
docker-compose exec mysql rm /tmp/$filename.gz
sudo find $BACKUP_DIR/mysql/* -mtime +30 -exec rm {} \;
But when I add it to the crontab, I get the error "the input device is not a TTY". It's coming from the docker-compose exec command, even though there's no -it flag. When I run the script directly from the shell (./backup.sh), it works fine.
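A likely explanation, hedged since only the symptom is shown: unlike docker exec, docker-compose exec allocates a TTY by default, and cron jobs run without one. Passing -T disables the pseudo-TTY allocation; the same flag applies to the cleanup line:
docker-compose exec -T mysql bash -c "mysqldump --user=$MYSQL_USERNAME --password='$MYSQL_PASSWORD' --ignore-table=$MYSQL_DATABASE.forums_readData_forums_c --ignore-table=$MYSQL_DATABASE.forums_readData_newPosts $MYSQL_DATABASE | gzip > /tmp/$filename.gz"
docker-compose exec -T mysql rm /tmp/$filename.gz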
I'm trying to automate a backup of a postgres database running in Docker:
#!/bin/bash
POSTGRES_USERNAME="XXX"
CONTAINER_NAME="YYY"
DB_NAME="ZZZ"
BACKUP="/tmp/backup/${DB_NAME}.sql.gz"
docker exec -it -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} |
gzip -c > ${BACKUP}
exit
If I run this manually, I get the backup, but when I schedule the script as a cron job I end up with an empty folder.
Can anyone please help me?
Hello everyone, thank you so much for your immediate responses. I think I have fixed my issue.
The cron job failed because of --interactive mode:
docker exec -it -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} | gzip -c > ${BACKUP}
After removing -i (--interactive) from the shell script, it works perfectly:
docker exec -t -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} | gzip -c > ${BACKUP}
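One caveat, offered as a hedged note rather than a confirmed fix: the remaining -t still allocates a pseudo-TTY, which can translate line endings in the output and corrupt a dump piped through gzip. Since nothing in this pipeline needs a terminal, dropping it as well should be safe:
docker exec -u ${POSTGRES_USERNAME} ${CONTAINER_NAME} pg_dump -d ${DB_NAME} | gzip -c > ${BACKUP}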
docker exec -it CONTAINER_ID psql -U postgres -d DB -c "select 1" > /dev/null 2>&1
How do I get the output of select 1 in the terminal?
Try this:
docker exec CONTAINER_ID bash -c "psql -U postgres -d DB -c \"select 1\" "
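Note that the original command hides the result itself: > /dev/null 2>&1 discards both stdout and stderr. Simply dropping the redirect also shows the output:
docker exec -it CONTAINER_ID psql -U postgres -d DB -c "select 1"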
I'm trying to dump a remote database into a local docker container's database.
$ docker exec -i my_container \
mysqldump my_remote_database -h my_remote_host.me -u my_remote_user -p
This gives me the remote dump just fine.
So here are my attempts:
$ docker exec -i my_container \
mysqldump my_remote_database -h my_remote_host.me -u my_remote_user -p \
| docker exec -i my_container mysql -u my_local_user -pmy_local_password \
-D my_local_database
$ docker exec -i my_container bash -c \
"mysqldump my_remote_database -h my_remote_host.pp -u my_remote_user -p \
| mysql -u my_local_user -pmy_local_password -D my_local_database"
Neither seems to have any effect on the local database (no error, though).
How can I transfer this data?
I always like to hammer out problems from inside the container in an interactive terminal.
First, get the container image running and check the following:
If you bash into the local container (docker exec -ti my_container bash), does the remote hostname my_remote_host.me resolve, and can you route to it? Use ping or nslookup.
From inside the interactive bash shell, can you connect to the remote db? Just try a vanilla mysql CLI connection using the credentials.
Finally try the dump from inside the interactive terminal and just create a mydump.sql dump file.
Then check that inside the container:
You can connect to the local DB with the credentials provided (if using a TCP connection rather than a socket, the hostname should resolve, but it looks like your local db is using a socket).
See if you can load the dump file using mysql -u my_local_user -pmy_local_password -D mylocaldb < mydump.sql
If this all works then you can start looking at why the pipe is failing, but I suspect the issue may be with one of the resolutions.
I notice you say local database. Is the 'local database' inside the container or on the Docker host running a socket connection?
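One more hedged guess based on the commands shown: mysqldump is invoked with a bare -p, which makes it prompt for a password, and inside a pipeline that prompt has nothing to read from and can silently derail the transfer. Supplying the password inline removes the prompt (my_remote_password below is a placeholder for the real one):
docker exec -i my_container bash -c "mysqldump my_remote_database -h my_remote_host.me -u my_remote_user -pmy_remote_password | mysql -u my_local_user -pmy_local_password -D my_local_database"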
I am following this tutorial for service discovery http://jasonwilder.com/blog/2014/07/15/docker-service-discovery
Briefly:
I created an etcd host running at x.y.z.d:4001
docker run -d --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd
Created a backend server running a container at backend_serverip:8000 and docker-register
$ docker run -d -p 8000:8000 --name whoami -t jwilder/whoami
$ docker run --name docker-register -d -e HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}') -e ETCD_HOST=x.y.z.d:4001 -v /var/run/docker.sock:/var/run/docker.sock -t jwilder/docker-register
Created another backend server running a container at backend2_serverip:8000 and docker-register
$ docker run -d -p 8000:8000 --name whoami -t jwilder/whoami
$ docker run --name docker-register -d -e HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}') -e ETCD_HOST=x.y.z.d:4001 -v /var/run/docker.sock:/var/run/docker.sock -t jwilder/docker-register
Created a client running docker-discover and an ubuntu image
$ docker run -d --net host --name docker-discover -e ETCD_HOST=10.170.71.226:4001 -p 127.0.0.1:1936:1936 -t jwilder/docker-discover
When I look at the logs to see whether containers are being registered, I see the following error:
2015/07/09 19:28:00 error running notify command: python /tmp/register.py, exit status 1
2015/07/09 19:28:00 Traceback (most recent call last):
File "/tmp/register.py", line 22, in <module>
backends = client.read("/backends")
File "/usr/local/lib/python2.7/dist-packages/etcd/client.py", line 347, in read
self.key_endpoint + key, self._MGET, params=params, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/etcd/client.py", line 587, in api_execute
return self._handle_server_response(response)
File "/usr/local/lib/python2.7/dist-packages/etcd/client.py", line 603, in _handle_ser
etcd.EtcdError.handle(**r)
File "/usr/local/lib/python2.7/dist-packages/etcd/__init__.py", line 184, in handle
raise exc(msg, payload)
etcd.EtcdKeyNotFound: Key not found : /backends
I tried manually creating this directory, and I also tried running the containers with the privileged option, but no luck.
The error you are getting is from a bug in the code: /backends does not exist in your etcd directory. You can create it yourself manually by running this:
curl -L http://127.0.0.1:4001/v2/keys/backends -XPUT -d dir=true
Once the directory exists in etcd, you won't get the error anymore.
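You can verify the directory exists afterwards (assuming the same etcd v2 keys API endpoint as above):
curl -L http://127.0.0.1:4001/v2/keys/backends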
I created a pull request that fixes the bug and if you want to use the fixed code, you can build your own image:
git clone git@github.com:rca/docker-register.git
cd docker-register
docker build -t docker-register .
Then your command for docker-register would look like:
$ docker run --name docker-register -d -e HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}') -e ETCD_HOST=x.y.z.d:4001 -v /var/run/docker.sock:/var/run/docker.sock -t docker-register
Note I simply removed jwilder/ from the image name in the command so it uses your local version.