restore mongodb database .bson and .json files - ruby-on-rails

I have a folder called my_backup containing a MongoDB database dump with all my models/collections, for example:
admins.bson
admins.metadata.json
categories.bson
categories.metadata.json
pages.bson
pages.metadata.json
...
I have a database called ubuntu_development on MongoDB. I am working with Rails 3 + Mongoid.
How can I import/restore all models/collections from the folder my_backup into my database ubuntu_development?
Thank you very much!

Execute this command from the console (in this case):
mongorestore my_backup --db ubuntu_development
mongorestore is followed by my_backup, which is the name of the folder where the previous dump of the database is saved.
--db ubuntu_development specifies the database name where we want to restore the data.
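For reference, a dump folder like my_backup is normally produced beforehand with mongodump; assuming the database name from the question, that would look like:
mongodump --db ubuntu_development --out my_backup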

To import .bson files
mongorestore -d db_name -c collection_name path/file.bson
In case you want only a single collection, try this (--drop removes the existing collection before restoring):
mongorestore --drop -d db_name -c collection_name path/file.bson
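If you want to restore every collection this way instead of pointing mongorestore at the whole folder, a small shell loop works; a sketch, assuming the folder and database names from the question:
for f in my_backup/*.bson; do
  mongorestore --drop -d ubuntu_development -c "$(basename "$f" .bson)" "$f"
done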
To import .json files
mongoimport --db db_name --collection collection_name --file name.json

You have to run this mongorestore command via cmd, not in the Mongo shell:
>path\to\mongorestore.exe -d dbname -c collection_name path\to\same\collection.bson
Here path\to\mongorestore.exe is the path to mongorestore.exe inside the bin folder of MongoDB, dbname is the name of the database, collection_name is the name of the collection's .bson file, and path\to\same\collection.bson is the path up to that collection.
Now, from the Mongo shell, you can verify whether the database was created (if it did not exist, a database with that name is created along with the collection).
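For example, a quick check from the Mongo shell (the database and collection names are placeholders):
show dbs
use dbname
show collections
db.collection_name.count()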

Related

Restore a SQL Server DB.bak in a Dockerfile

I am running a .NET Razor application, an instance of gitea, and a SQL Server database each in separate containers that communicate with one another. I would like to start my database image with a database schema and data (by restoring a .bak file).
I can do this with my current Dockerfile if, once it is up and running, I run these additional commands:
docker exec -it myContainer /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword -Q "RESTORE DATABASE MY_DB_NAME FROM DISK='/var/opt/mssql/backup/MY_DB_NAME.bak' WITH MOVE 'MY_DB_NAME_TEST' TO '/var/opt/mssql/data/MY_DB_NAME_TEST.mdf', MOVE 'MY_DB_NAME_TEST_log' TO '/var/opt/mssql/data/MY_DB_NAME_TEST_log.ldf'"
This gets the job done, but I want to fully automate the process so that it is configured 100% by my docker-compose.yml and Dockerfile and I need only type docker-compose up -d.
I don't think the content of my docker-compose.yml file is relevant, but here is my Dockerfile (where I am trying to run that script that I currently need to run after docker-compose up):
FROM microsoft/mssql-server-linux
ENV SA_PASSWORD=myPassword
ENV ACCEPT_EULA=Y
COPY ./ACES_DB.bak /var/opt/mssql/backup/MY_DB_NAME.bak
RUN docker exec -it myContainer bin/sh /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword -Q "RESTORE DATABASE MY_DB_NAME FROM DISK='/var/opt/mssql/backup/MY_DB_NAME.bak' WITH MOVE 'MY_DB_NAME_TEST' TO '/var/opt/mssql/data/MY_DB_NAME_TEST.mdf', MOVE 'MY_DB_NAME_TEST_log' TO '/var/opt/mssql/data/MY_DB_NAME_TEST_log.ldf'"
Any help would be much appreciated.
A friend and I puzzled through this together and eventually found this solution. Here's what the Dockerfile looks like:
FROM microsoft/mssql-server-linux
ENV MSSQL_SA_PASSWORD=myPassword
ENV ACCEPT_EULA=Y
COPY ./My_DB.bak /var/opt/mssql/backup/My_DB.bak
COPY restore.sql restore.sql
RUN (/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Starting database restore" && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'myPassword' -d master -i restore.sql
*Note that I moved the SQL restore statement to a .sql file.
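The restore.sql file itself is not shown in the answer; a minimal sketch, reusing the paths and logical file names from the question:
RESTORE DATABASE MY_DB_NAME
FROM DISK = '/var/opt/mssql/backup/My_DB.bak'
WITH MOVE 'MY_DB_NAME_TEST' TO '/var/opt/mssql/data/MY_DB_NAME_TEST.mdf',
     MOVE 'MY_DB_NAME_TEST_log' TO '/var/opt/mssql/data/MY_DB_NAME_TEST_log.ldf'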
Expanding on @joshua-abbott's answer. Here is my setup for restoring multiple DBs to the mssql 2019 docker image, and replacing the 'default' password used to restore the DBs.
Dockerfile
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV DEFAULT_MSSQL_SA_PASSWORD=myStrongDefaultPassword
ENV ACCEPT_EULA=Y
USER root
COPY restore-db.sh entrypoint.sh /opt/mssql/bin/
RUN chmod +x /opt/mssql/bin/restore-db.sh /opt/mssql/bin/entrypoint.sh
ADD data.tar.gz /var/opt/mssql/
RUN chown -R mssql:root /var/opt/mssql/data && \
chmod 0755 /var/opt/mssql/data && \
chmod -R 0650 /var/opt/mssql/data/*
USER mssql
RUN /opt/mssql/bin/restore-db.sh
CMD [ "/opt/mssql/bin/sqlservr" ]
ENTRYPOINT [ "/opt/mssql/bin/entrypoint.sh" ]
restore-db.sh
#!/bin/bash
export MSSQL_SA_PASSWORD=$DEFAULT_MSSQL_SA_PASSWORD
(/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Server is listening on" && sleep 2
for restoreFile in /var/opt/mssql/data/*.bak
do
fileName=${restoreFile##*/}
base=${fileName%.bak}
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $MSSQL_SA_PASSWORD -Q "RESTORE DATABASE [$base] FROM DISK = '$restoreFile'"
rm -rf $restoreFile
done
entrypoint.sh
#!/bin/bash
/opt/mssql-tools/bin/sqlcmd \
-l 60 \
-S localhost -U SA -P "$DEFAULT_MSSQL_SA_PASSWORD" \
-Q "ALTER LOGIN SA WITH PASSWORD='${MSSQL_SA_PASSWORD}'" &
/opt/mssql/bin/permissions_check.sh "$@"
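Building and running it might then look like this (the image name and runtime password are placeholders); the entrypoint swaps the default restore password for the one you pass at runtime:
docker build -t mssql-restored .
docker run -d -p 1433:1433 -e MSSQL_SA_PASSWORD=myRuntimePassword mssql-restored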
I voted for @Joshua Abbott's answer, but I needed to customize it to match the question, i.e. to restore from a .bak file as required:
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=xxxxxxxx
ENV MSSQL_PID=Developer
ENV MSSQL_TCP_PORT=1433
WORKDIR /src
COPY ["API/db/db.bak", "dbbackups/"]
RUN (/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Starting database restore" && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'xxxxxxxx' -Q "RESTORE FILELISTONLY FROM DISK='/dbbackups/db.bak';"
You just need to change xxxxxxxx to your password. You can name your container as you want using the docker-compose file/override files.
It is simple: I use SQL Server Management Studio. When you create your Docker container, you declare a volume for a directory; just put the backup there and then restore it in SQL Server.
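In other words, something along these lines (the host path and password are placeholders), after which you restore the .bak from SSMS as usual:
docker run -d -p 1433:1433 \
  -e ACCEPT_EULA=Y -e MSSQL_SA_PASSWORD=myPassword \
  -v /host/backups:/var/opt/mssql/backup \
  mcr.microsoft.com/mssql/server:2019-latest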
You can create a stored procedure in one of your databases for creating an automatic backup. I found this one and made some adaptations for my use.
------ If you create this and then execute it------
CREATE PROCEDURE [dbo].[P_M_Backup]
AS
DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name

-- specify database backup directory
SET @path = '/var/opt/mssql/data/Backup/'

-- specify filename format
SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)

DECLARE db_cursor CURSOR READ_ONLY FOR
SELECT name
FROM master.sys.databases
WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb', 'Eikon_CDEEE') -- exclude these databases
  AND state = 0 -- database is online
  AND is_in_standby = 0 -- database is not read only for log shipping

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
   SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
   BACKUP DATABASE @name TO DISK = @fileName
   FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor
/** SET @path = '/var/opt/mssql/data/Backup/': mssql/data/ is the directory where I have mounted SQL Server from Docker, and Backup is a directory inside it, so you have to change this to your own directory **/
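Once the procedure exists, a backup run is just:
EXEC [dbo].[P_M_Backup];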

delete volumes from images

When I create a container from docker-compose with some volumes and then commit that container, the volumes in the docker-compose file are committed too. Is there a way to not commit the volumes in the image?
With the command below I can only add volumes, not delete them:
docker commit -c 'VOLUME /foo' container_name image_name
Thank you.
Update (April 2018): In "How can I edit an existing docker image metadata?", Guido U. Draheim proposes gdraheim/docker-copyedit, a Python script that can edit docker image metadata.
It can remove or override image metadata, including volumes.
The command would be:
./docker-copyedit.py FROM image1 INTO image2 REMOVE ALL VOLUMES
Since 2018, the same issue also includes this feedback (from Aalex Gabi):
For building a CI image with an embedded MySQL database snapshot I ended up using this solution: "Persist & share dev data in a Docker image with commit" from Steven Landow.
FROM mysql:5.7
ADD snapshots/default.sql /tmp/default.sql
# Using separate data folder outside of mysql image declared volume
# https://github.com/moby/moby/issues/3465
# https://medium.com/@stevenlandow/persist-share-dev-mysql-data-in-a-docker-image-with-commit-f9aa9910be0a
RUN mkdir /var/lib/mysql-no-volume
RUN set -exu ;\
MYSQL_ROOT_PASSWORD=root docker-entrypoint.sh --datadir /var/lib/mysql-no-volume &\
MYSQL_PID=$! &&\
timeout 22 bash -c 'until printf "" 2>>/dev/null >>/dev/tcp/$0/$1; do sleep 1; done' localhost 3306 &&\
mysql -proot -e 'create database `mydb` collate "utf8mb4_general_ci"' &&\
mysql -proot mydb < /tmp/default.sql &&\
kill $MYSQL_PID &&\
tail --pid=$MYSQL_PID -f /dev/null # Using tail to wait for PID to end https://unix.stackexchange.com/questions/427115/listen-for-exit-of-process-given-pid
# Using separate data folder outside of mysql image declared volume
# https://github.com/moby/moby/issues/3465
# https://medium.com/@stevenlandow/persist-share-dev-mysql-data-in-a-docker-image-with-commit-f9aa9910be0a
CMD ["--datadir", "/var/lib/mysql-no-volume"]
It seems that this is currently not possible, though many people have requested the feature and someone might be working on it. This GitHub issue discusses the topic:
https://github.com/moby/moby/issues/3465

How to create a Dockerfile for cassandra (or any database) that includes a schema?

I would like to create a dockerfile that builds a Cassandra image with a keyspace and schema already there when the image starts.
In general, how do you create a Dockerfile that will build an image that includes some step(s) that can't really be done until the container is running, at least the first time?
Right now, I have two steps: build the Cassandra image from an existing Cassandra Dockerfile, mapping a volume with the CQL schema files into a temporary directory, and then, after the image has been started as a container, run docker exec with cqlsh to import the schema.
But that doesn't create an image with the schema - just a container. That container could be saved as an image, but that's cumbersome.
docker run --name $CASSANDRA_NAME -d \
-h $CASSANDRA_NAME \
-v $CASSANDRA_DATA_DIR:/data \
-v $CASSANDRA_DIR/target:/tmp/schema \
tobert/cassandra:2.1.7
then
docker exec $CASSANDRA_NAME cqlsh -f /tmp/schema/create_keyspace.cql
docker exec $CASSANDRA_NAME cqlsh -f /tmp/schema/schema01.cql
# etc
This works, but it makes it impossible to use with tools like Docker Compose, since linked containers/services will start up too and expect the schema to already be in place.
I saw one attempt where the cassandra process was started in the background during the Dockerfile build and cqlsh was then run against it, but I don't think that worked too well.
OK, I had this issue and someone advised me a strategy to deal with it:
Start from an existing Cassandra Dockerfile, the official one for example
Remove the ENTRYPOINT stuff
Copy the schema (.cql) file and data (.csv) into the image and put them somewhere, /opt/data for example
Create a shell script that will be used as the last command to start Cassandra (a sketch follows this list):
a. start cassandra with $CASSANDRA_HOME/bin/cassandra
b. IF there is a $CASSANDRA_HOME/data/data/your_keyspace-xxxx folder and it's not empty, do nothing more
c. Else
1. sleep some time to allow the server to listen on port 9042
2. when port 9042 is listening, execute the .cql script to load the csv files
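A minimal sketch of such a start script, assuming the schema was copied to /opt/data/schema.cql as described (the keyspace name and log path are placeholders):
#!/bin/bash
# start Cassandra; without -f it moves itself to the background
"$CASSANDRA_HOME"/bin/cassandra

# if a keyspace data folder already exists, the schema was loaded on an earlier run
if [ -n "$(ls -d "$CASSANDRA_HOME"/data/data/your_keyspace* 2>/dev/null)" ]; then
  echo "schema already present, nothing to do"
else
  # wait until the server answers on 9042, then load the schema (which loads the csv files)
  until cqlsh -e 'DESCRIBE KEYSPACES' > /dev/null 2>&1; do sleep 5; done
  cqlsh -f /opt/data/schema.cql
fi

# keep the container in the foreground
tail -f "$CASSANDRA_HOME"/logs/system.log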
I found this procedure rather cumbersome, but there seems to be no way around it. For a Cassandra hands-on lab, I found it easier to create a VM image using Vagrant and Ansible.
Make a Dockerfile, Dockerfile_CAS:
FROM cassandra:latest
COPY ddl.cql docker-entrypoint-initdb.d/
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN ls -la *.sh; chmod +x *.sh; ls -la *.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["cassandra", "-f"]
edit docker-entrypoint.sh, add
for f in docker-entrypoint-initdb.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.cql) echo "$0: running $f" && until cqlsh -f "$f"; do >&2 echo "Cassandra is unavailable - sleeping"; sleep 2; done & ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
above exec "$@"
docker build -t suraj1287/cassandra -f Dockerfile_CAS .
and rebuild the image...
Another approach used by our team is to create the schema on server init.
Our Java code tests whether the SCHEMA exists and, if not (new environment, new deployment), creates it.
The same goes for every new TABLE: automatic CREATE TABLE statements create the required new tables for new data entities when they run in any new cluster (another developer's local machine, preproduction, production).
All this code is isolated inside our DataDriver classes for portability, in case we swap Cassandra for another DB in some client or project.
This prevents a lot of hassle both for admins and for developers.
This approach is even valid for initial data loading, which we use in tests.
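The CQL such driver code effectively issues is idempotent thanks to IF NOT EXISTS; the keyspace and table names here are made up for illustration:
CREATE KEYSPACE IF NOT EXISTS myapp
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS myapp.users (
  id uuid PRIMARY KEY,
  name text
);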

Exporting mysql database using mysqldump including procedures

While exporting databases using mysqldump like this,
mysqldump -u mysqluser -pmysqlpassword databasename > /tmp/databasename.sql
Will this command also export the stored procedures listed by the following command?
SHOW PROCEDURE STATUS WHERE db = 'databasename';
If not, how do I export a MySQL database using mysqldump along with its associated stored procedures from the Linux terminal? Also note that I cannot use phpMyAdmin for this purpose.
Try this.
mysqldump -u mysqluser -pmysqlpassword --routines databasename > /tmp/databasename.sql
Refer to this link: http://www.ducea.com/2007/07/25/dumping-mysql-stored-procedures-functions-and-triggers/
We can use the -R flag as a substitute for the --routines flag while dumping, as the other answer suggests.
mysqldump -u mysqluser -pmysqlpassword -R databasename > /tmp/databasename.sql
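Triggers are included in dumps by default; if the database also uses scheduled events, add --events as well (shown here with an interactive password prompt instead of a password on the command line):
mysqldump -u mysqluser -p --routines --triggers --events databasename > /tmp/databasename.sql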

How to backup/restore Rails db with Postgres?

I do the following on my server:
pg_dump -O -c register_production > register.sql
Then, after copying register.sql to my local environment, I try:
psql register_development < register.sql
This appears to work, but when I try to launch the Rails site locally, I get this:
PG::UndefinedTable: ERROR: relation "list_items" does not exist at character 28
How can I restore everything (including relations) from the server db to my local dev db?
I use this command to save my database:
pg_dump -F c -v -U postgres -h localhost <database_name> -f /tmp/<filename>.psql
And this to restore it:
pg_restore -c -C -F c -v -U postgres -d postgres /tmp/<filename>.psql
This dumps the database in Postgres' custom format (-F c), which is compressed by default and allows reordering of its contents. -c -C will drop the database if it already exists and then recreate it, which is helpful in your case; -d postgres only gives pg_restore an existing database to connect to while it issues the CREATE DATABASE (without -d, pg_restore just prints the SQL to standard output). And -v specifies verbose, so you can see exactly what's happening.
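If you would rather keep the existing register_development database and only replace its contents, skip -C and point -d straight at it; a sketch:
pg_restore -c -v -U postgres -d register_development /tmp/<filename>.psql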
Does the register_development database exist before you run the psql command? Because that form will not create it for you.
See http://www.postgresql.org/docs/8.1/static/backup.html#BACKUP-DUMP-RESTORE for more information.
