How to back up and restore a PostgreSQL database on a Linux server - postgresql-10

I have a requirement to back up and restore a PostgreSQL database from a Dev server to a QA server. The database size is 1 TB.
Is there any approach to restore directly from the Dev server to the QA server without creating an intermediate file?

Replicate the Dev server data on the QA server using the pg_basebackup command (run on the QA server):
pg_basebackup -h Dev_Server_host_name -D /QA_Server_Data_Directory/ -P -v
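Note that pg_basebackup copies the whole cluster and requires the target data directory to be empty, with the QA PostgreSQL instance stopped. If a logical copy of a single database is enough, a dump can also be streamed directly between the servers without an intermediate file; a minimal sketch, assuming a source_db on Dev and an already-created, empty target_db on QA (both names are placeholders):
pg_dump -h Dev_Server_host_name source_db | psql -h QA_Server_host_name target_db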

Related

How to restore influxdb from local backup || InfluxDB

I am trying to import a client-provided InfluxDB data backup into my local InfluxDB, but I am getting an error (posted as a screenshot).
ENV: Ubuntu
The InfluxDB service is already running.
I am trying to restore the InfluxDB backup.
First of all, you are using the wrong command to execute the InfluxDB task: it should be influxd instead of influx. Please follow the steps below to restore your database.
Check your data-dir path by executing influxd config, and copy the data-dir section.
See the database name by executing: SHOW DATABASES
Execute the restore: influxd restore -database {database_name} -data-dir {your_data_dir_path} /path/to/your/backup
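For example, filling in the template with hypothetical values (take the real database name and data-dir from the two previous steps):
influxd restore -database mydb -data-dir /var/lib/influxdb/data /home/ubuntu/influx-backup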

Magento, database dump

I am trying to get a DB dump with the command
docker exec container-name sh -c 'exec mysqldump --all-databases -uroot -p""' > db-backups/some-dump-name.sql
and I am getting
Got error: 2002: "Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)" when trying to connect
Magento runs on this image. Any ideas what could be wrong? I can provide more details if needed.
Bitnami Engineer here,
You also need to set the hostname of the database when backing up the databases. The Magento container doesn't include a database server; it uses an external one.
You probably specified that using the MARIADB_HOST env variable. If you used the docker-compose.yml file we provide, that hostname is mariadb.
exec mysqldump --all-databases -uroot -h HOSTNAME -p""
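Combined with the original command, the full invocation would look like this (using the mariadb hostname from the provided docker-compose.yml; substitute your own value of MARIADB_HOST otherwise):
docker exec container-name sh -c 'exec mysqldump --all-databases -uroot -h mariadb -p""' > db-backups/some-dump-name.sql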

What is Heroku PG:PULL back-end?

How to reproduce
heroku pg:pull
in my own environment.
(Clone production db to local)
Is it possible to clone the DB from production to a local machine if production is configured to use AWS as the database?
Maybe
mysqldump -u root -h heroku_database_host -p heroku_database_name > backup.sql
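Note that Heroku Postgres is PostgreSQL rather than MySQL, so the closer equivalent uses the Postgres client tools: pg:pull essentially creates a fresh local database and loads the remote one into it. A rough sketch, where DATABASE_URL is the remote connection string (from heroku config) and my_local_copy is a placeholder name:
createdb my_local_copy
pg_dump "$DATABASE_URL" | psql my_local_copy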

How to persist configuration & analytics across container invocations in Sonarqube docker image

The official Sonarqube docker image does not persist any configuration changes such as creating users, changing the root password, or installing new plugins.
Once the container is restarted, all configuration changes disappear and the installed plugins are lost. Even the projects' keys and their previous QA analytics data are unavailable after a restart.
How can we persist the data when using Sonarqube's official docker image?
The Sonarqube image comes with an embedded H2 database engine, which is not recommended for production and does not persist across container restarts.
We need to set up a database of our own and point Sonarqube to it when starting the container.
The Sonarqube docker image exposes two volumes, "$SONARQUBE_HOME/data" and "$SONARQUBE_HOME/extensions", as seen in the Sonarqube Dockerfile.
Since we want to persist the data across invocations, we need to make sure that a production-grade database is set up and linked to Sonarqube, and that the extensions directory is created and mounted as a volume on the host machine, so that all downloaded plugins are available across container invocations and can be used by multiple containers (if required).
Database Setup:
create database sonar;
grant all on sonar.* to `sonar`@`%` identified by "SOME_PASSWORD";
flush privileges;
# since we do not know the container's IP beforehand, we use '%' for the sonarqube host IP.
It is not necessary to create tables, Sonarqube creates them if it doesn't find them.
Starting up Sonarqube container:
# create a directory on host
mkdir -p /server_data/sonarqube/extensions
mkdir -p /server_data/sonarqube/data # this will be useful in saving startup time
# Start the container
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=SOME_PASSWORD \
-e SONARQUBE_JDBC_URL="jdbc:mysql://HOST_IP_OF_DB_SERVER:PORT/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance" \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube
Hi @VanagaS and others landing here.
I just wanted to provide an alternative to the above. Maybe some would even consider it an easier one.
Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
...
...
-e SONARQUBE_HOME=/sonarqube-data \
-v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create its conf, data and other folders under that path and store data there as needed.
Or, with Kubernetes, in your deployment YAML file:
...
...
env:
- name: SONARQUBE_HOME
  value: /sonarqube-data
...
...
volumeMounts:
- name: app-volume
  mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
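For reference, the matching volumes entry could look like the following; the PersistentVolumeClaim name sonarqube-pvc is a hypothetical placeholder:
volumes:
- name: app-volume
  persistentVolumeClaim:
    claimName: sonarqube-pvc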
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf and other folders, and save its data there.
And voilà, your Sonarqube data is persisted.
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Since SonarQube v7.9, MySQL is no longer supported; you need to use PostgreSQL. Install PostgreSQL and configure it to listen on the host IP rather than localhost (a private IP is preferred).
Reference: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04
postgres=# create database sonar;
postgres=# create user sonar with encrypted password 'mypass';
postgres=# grant all privileges on database sonar to sonar;
# create a directory on host
mkdir -p /server_data/sonarqube/extensions
mkdir -p /server_data/sonarqube/data # this will be useful in saving startup time
# Start the container
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=mypass \
-e SONARQUBE_JDBC_URL=jdbc:postgresql://{host/private ip only}:5432/sonar \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube
You may face this error when you run docker logs container_id:
ERROR: [1] bootstrap checks failed [1]: max virtual memory areas
vm.max_map_count [65530] is too low, increase to at least [262144]
This is the fix; run it on your host:
sysctl -w vm.max_map_count=262144
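To keep the setting across reboots, persist it as well (standard sysctl mechanism):
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p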
To make PostgreSQL listen on the host IP, edit /etc/postgresql/10/main/postgresql.conf.
To allow the docker container as a client for postgres, edit /etc/postgresql/10/main/pg_hba.conf.
(10 is the postgres version used.)
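For example, the relevant lines could look like this (the Docker bridge subnet 172.17.0.0/16 is an assumption; adjust it to your network):
# /etc/postgresql/10/main/postgresql.conf
listen_addresses = '*'
# /etc/postgresql/10/main/pg_hba.conf
host    sonar    sonar    172.17.0.0/16    md5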

Installing PostgreSQL within a docker container

I've been following several different tutorials as well as the official one; however, whenever I try to install PostgreSQL within a container, I get the following message afterwards:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've looked through several questions here on SO and throughout the internet but no luck.
The problem is that your application/project is trying to access the postgres socket file on the HOST machine (not in the docker container).
To solve it, either explicitly ask for a TCP/IP connection while using the -p flag to set up a port for the postgres container, or share the UNIX socket with the HOST machine using the -v flag.
Note: using the -v or --volume= flag means you are sharing some space between the HOST machine and the docker container. If you have postgres installed and running on your host machine, you will probably run into issues.
Below I demonstrate how to run a postgres container that is accessible both via TCP/IP and via the UNIX socket. I am also naming the container postgres.
docker run -p 5432:5432 -v /var/run/postgresql:/var/run/postgresql -d --name postgres postgres
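You can then verify both access paths from the host (note that newer versions of the official image also require -e POSTGRES_PASSWORD=... to start):
psql -h localhost -U postgres   # TCP/IP through the published port
psql -U postgres                # UNIX socket shared via /var/run/postgresql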
There are other solutions, but I find this one the most suitable. Finally, if the application/project that needs access is also a container, it is better to just link them.
By default psql tries to connect to the server using a UNIX socket. That's why we see /var/run/postgresql/.s.PGSQL.5432 - the location of the UNIX socket descriptor.
If you run postgresql-server in docker with port binding, you have to tell psql to use a TCP socket. Just add the host param (--host or -h):
psql -h localhost [any other params]
UPD: Or share the UNIX socket descriptor with the host (where psql will be started), as shown in the main answer. But I prefer the TCP socket as the more easily managed approach.
FROM postgres:9.6
RUN apt-get update && apt-get install -q -y postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6 postgresql-client-common postgresql-common
RUN echo postgres:postgres | chpasswd
# each RUN executes in a fresh layer, so the cluster must be created, started and configured within the same RUN
RUN pg_createcluster 9.6 main --start \
 && /etc/init.d/postgresql start \
 && su -c "psql -c \"ALTER USER postgres PASSWORD 'postgres';\"" postgres
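If you go this route, a minimal build-and-run sequence would be (the custom-postgres tag is an arbitrary placeholder):
docker build -t custom-postgres .
docker run -d --name custom-pg custom-postgres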
Here are instructions for fixing that error that should also work for your docker container: PostgreSQL error 'Could not connect to server: No such file or directory'
If that doesn't work for any reason, there are many off-the-shelf postgresql docker containers you can look at for reference on the Docker Index: https://index.docker.io/search?q=postgresql
Many of the containers are built from trusted repos on github. So if you find one that seems like it meets your needs, you can review the source.
The Flynn project has also included a postgresql appliance that might be worth checking out: https://github.com/flynn/flynn-postgres
Run the command below to create a new container with PostgreSQL running in it, which can be accessed from other containers/applications.
docker run --name postgresql-container -p 5432:5432 -e POSTGRES_PASSWORD=somePassword -d postgres
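For example, you can verify the container from the host with a connection string built from the values above (psql must be installed on the host):
psql "postgresql://postgres:somePassword@localhost:5432/postgres" -c '\l'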
Now, export the connection string or DB credentials from your .env and use them in the application.
Reference: detailed installation and running
