I am trying to insert data from a few CSV files into an Aurora Postgres DB.
I've tried various COPY commands, but none of them work. Is this not achievable?
Is there a way I can do this without downloading the files or using data pipelines?
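For context, Aurora PostgreSQL can load data straight from S3 through the aws_s3 extension instead of a plain COPY, without downloading the file first. A rough sketch run through psql is below; the cluster endpoint, database, table name (csv_data), bucket, and object key are all placeholders, and it assumes an IAM role with read access to the bucket has been attached to the cluster:

# One-time setup: install the extension that enables S3 imports
psql -h my-cluster.xxxxx.us-east-1.rds.amazonaws.com -U postgres -d mydb -c "CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;"

# Import data.csv from the bucket straight into the csv_data table
psql -h my-cluster.xxxxx.us-east-1.rds.amazonaws.com -U postgres -d mydb <<'SQL'
SELECT aws_s3.table_import_from_s3(
  'csv_data',                                  -- target table
  '',                                          -- column list ('' = all columns)
  '(format csv, header true)',                 -- COPY-style options
  aws_commons.create_s3_uri('my-bucket', 'data.csv', 'us-east-1')
);
SQL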
I have an S3 bucket containing a bunch of data in the format of a Ruby Hash. What I'd like to do upon running rails s is to have the data retrieved from the S3 bucket and used to seed the database. The data in the S3 bucket will always be changing, and the Rails app runs inside a container, so I'm going to need to seed the DB before every run. How do I seed a database from an S3 bucket?
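One common pattern (just a sketch, not something from the question itself) is to fetch the seed data from S3 in the container's entrypoint before booting Rails, and have db/seeds.rb read the downloaded file. The bucket, key, and paths below are made up, and it assumes the AWS CLI and credentials are available inside the container:

#!/bin/sh
# entrypoint.sh -- pull the latest seed data, reseed, then start the server
set -e

# Download the current seed data from S3 (bucket/key are placeholders)
aws s3 cp s3://my-seed-bucket/seed_data.rb db/seed_data.rb

# db/seeds.rb is expected to parse db/seed_data.rb and insert the records
bundle exec rails db:seed

# Finally boot Rails
exec bundle exec rails s -b 0.0.0.0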
Hi, I have a Ruby on Rails app hosted on AWS EC2, and it uses MySQL as its database. Now I have to take a backup of the database to my local machine.
There are two ways to take a backup.
Connect to your database through an SSH tunnel using the MySQL Workbench UI tool.
Connect to the EC2 instance, take the backup there, and copy the backup file to your machine using scp (a minimal sketch follows).
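For the second option, a minimal sketch might look like this (the host, key, user, and database names are placeholders; it assumes you can SSH into the instance and that mysqldump is installed there):

# On the EC2 instance: dump the database to a file
mysqldump -u db_user -p my_app_production > /tmp/my_app_production.sql

# From your local machine: copy the dump down over SSH
scp -i ~/.ssh/my_key.pem ec2-user@ec2-host:/tmp/my_app_production.sql ~/backups/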
Hope this might help you.
I did the same with a DigitalOcean application running PostgreSQL. Here is what I did. You will need an SSH connection; everywhere (DigitalOcean... and probably Amazon) explains how to set one up.
On the server (AWS in your case):
Create a cron job that runs a backup script daily:
crontab -e
and add the following to make a copy every day at 23:00:
0 23 * * * sh /home/rails/backup/backup_dump.sh
Then create /home/rails/backup/backup_dump.sh:
NOW=$(date +"%d")
FILE="app_production_$NOW.sql"
pg_dump -U rails -w app_production > /home/rails/backup/$FILE
Of course, pg_dump is what I use to back up my PostgreSQL database; in your case, with MySQL, you will need another tool (mysqldump, for example).
On your local machine:
Add a script to the /etc/cron.daily directory that retrieves the backup file from AWS:
NOW=$(date +"%d") # date - 1... the day before; I don't remember the exact syntax
scp -r root@ip_server:/home/rails/backup/app_production_$NOW.sql /local_machine/user/local_backups
And that's all. I hope it helps.
I am now working on Cloud9 and need to inspect my PostgreSQL database, so I would like to know the following two things.
How do I create a dump file from a PostgreSQL database in Cloud9 when I know the database name?
Which tool can I use to see the tables in a PostgreSQL database dump?
Use pg_dump. It looks like Cloud9 doesn't permit remote PostgreSQL connections, so you'll need to run it within the workspace, then transfer the dump file to your computer. Use pg_dump -Fc to make a PostgreSQL-custom-format dump, which is the recommended format.
To list the tables in the dump, use pg_restore -l to print the dump's table of contents. Generally, though, it's easier to work with dumps by restoring them to a local PostgreSQL install (again, using pg_restore).
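A minimal sketch of those steps, run inside the Cloud9 workspace (the database, user, and file names are placeholders):

# Create a custom-format dump of the database
pg_dump -Fc -U my_user my_database > my_database.dump

# Print the dump's table of contents (tables, indexes, and other objects)
pg_restore -l my_database.dump

# Or restore it into a local PostgreSQL install and browse it with psql
createdb my_database_copy
pg_restore -d my_database_copy my_database.dump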
For more detail, see the PostgreSQL manual.
Our server ran into a file limit issue with CarrierWave (over 36,000 files), so we are now going to move to S3.
Is there a way to migrate the files over to S3? When we launched the code to production, none of the images showed up, and there was a duh moment: it's trying to grab the files from S3 while they are still stored locally on the server.
How do we migrate the files over?
You can upload the files to S3 via the file manager in the S3 console, or by using a plugin such as S3Fox for Firefox. You'll just need to make sure the paths and the S3 bucket are set up so that CarrierWave knows how to point to each image via the right set of subfolders, etc.
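If you'd rather script the migration than use the console or S3Fox, one alternative (my own suggestion, with placeholder directory and bucket names) is to sync CarrierWave's local upload directory to the bucket with the AWS CLI so the subfolder layout is preserved:

# Mirror the local CarrierWave store_dir into the bucket, keeping the same paths
aws s3 sync public/uploads s3://my-app-bucket/uploads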
I have my own MySQL database with some data in it. Now I would like to dump my database into the EngineYard cloud.
I have a cloud account and have deployed the complete app to it. Now I need the data to run my project.
So could you please explain how to dump the DB into the EngineYard cloud?
Ramesh
I am not sure about EngineYard, but on Heroku you can issue the heroku db:push command, which pushes your local DB into the cloud. I guess for EngineYard a similar command should work.
http://blog.heroku.com/archives/2009/3/18/push_and_pull_databases_to_and_from_heroku/
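If EngineYard has no equivalent command, a more generic route (sketched with placeholder host, user, and database names) is to dump the local MySQL database and import it on the instance over SSH:

# Dump the local database
mysqldump -u root -p my_app_development > my_app.sql

# Copy the dump to the cloud instance and import it there
scp my_app.sql deploy@my-ey-instance:/tmp/my_app.sql
ssh -t deploy@my-ey-instance "mysql -u db_user -p my_app_production < /tmp/my_app.sql"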