I have a ruby on rails application hosted on heroku using postgresql as its database. Since the database is getting pretty large, I was wondering if there's a way to download only a specific part of it off of heroku. For example, is it possible to download only one specific table, or download only rows with parent_id == x.
In addition to Steve's quite correct answer, you also have the option of connecting using psql to the DATABASE_URL and using \copy, e.g.
$ psql "$(heroku config:get DATABASE_URL)"
mydb=> \copy mytable TO 'mytable.csv' WITH (FORMAT CSV, HEADER)
mydb=> \copy (SELECT col1, col2 FROM mytable2 WHERE ...) TO 'mytable2_partial.csv' WITH (FORMAT CSV, HEADER)
You can extract whole tables, or the output of arbitrary queries (including joins etc). The table definition (DDL) is not exported this way, but can be dumped with pg_dump --schema-only -t ....
Using the DATABASE_URL config variable you can point pg_dump at your database, and the -t switch restricts the dump to a specific table.
For example, to export the table my_table into a file called db.sql:
pg_dump -t my_table `heroku config:get DATABASE_URL` > db.sql
If you need to limit the download to certain rows then I don't think pg_dump will do the job on its own. You could create another table in your Heroku database to first define the subset of rows that you want to download and then have pg_dump dump only that table. See this question for some ideas about how to do that: Export specific rows from a PostgreSQL table as INSERT SQL script
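A hedged sketch of that staging-table approach (the table and column names here are made up, and DATABASE_URL is assumed to point at your Heroku database):

```shell
# Materialize the rows you want into a staging table (names are hypothetical).
psql "$(heroku config:get DATABASE_URL)" <<'SQL'
CREATE TABLE mytable_subset AS
  SELECT * FROM mytable WHERE parent_id = 42;
SQL

# Dump only the staging table, then clean it up again.
pg_dump -t mytable_subset "$(heroku config:get DATABASE_URL)" > subset.sql
psql "$(heroku config:get DATABASE_URL)" -c 'DROP TABLE mytable_subset;'
```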
I have a Project-A, and I'm starting Project-B. I want to use Project-A as a starting point. So I copied the files, but how can I duplicate the database? Thank you!
The exact command depends on what type of database you are copying from and to, and also on whether you want to copy the structure only or the structure and the content.
A general way to do this would be to export the Project-A database into an SQL file, then run that SQL file through the project-B database. The SQL file can store the structure, or the content or both - you choose when you do the export.
Postgresql uses the command pg_dump to export to SQL. The accepted answer in the question linked to in jdgray's comment shows how the output of pg_dump can be piped directly into the second database so that no intermediate file is created.
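A minimal sketch of that pipe, assuming both databases live on the same local PostgreSQL server and are named after the projects (adjust the names to match your config/database.yml):

```shell
# Create an empty database for Project-B, then stream Project-A's
# structure and content straight into it -- no intermediate file.
createdb project_b_development
pg_dump project_a_development | psql project_b_development
```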
To dump your database:
pg_dump -Fc mydb > db.dump
To restore it:
pg_restore -d <your_new_db_name> db.dump
This is assuming you are going from pg to pg. All data, structure, and relationships will come over with this. I would suggest using pgAdmin 4 to create the new db beforehand so you can restore straight into it. In your database.yml, change the db name.
If you need additional options, such as declaring which host your db is on, use the -h flag (-p sets the port, not the address). Here is the link to more flags (Postgres v9.6):
Postgres Link
I just edited the db name in database.yml and ran rake db:create db:migrate
I've created a dump file of one table on my local db using pg_dump and want to import it to my production db on Heroku to replace the current production version of the table.
I was looking at these instructions: https://devcenter.heroku.com/articles/heroku-postgres-import-export#import-to-heroku-postgres
Should I use pg_restore or can I use heroku pg:backups:restore? Does using heroku pg:backups:restore drop the entire db and replace it with the contents of the dump file or will it only drop and replace what is in the dump file? I am concerned about data other than what's in the dump file being dropped (since my dump file is only one table). I will of course create a backup of production before doing this, but am curious what the best approach is. Thanks!
I'm now working on Cloud9 and need to look into my PostgreSQL database, so I'd like to know the two things below.
How do I create a dump file from a PostgreSQL database in Cloud9 when I know the database name?
Which tool can I use to see the tables of a PostgreSQL database dump?
Use pg_dump. It looks like Cloud9 doesn't permit remote PostgreSQL connections, so you'll need to run it within the workspace, then transfer the dump file to your computer. Use pg_dump -Fc to make a PostgreSQL-custom-format dump, which is the recommended format.
To list tables in the dump use pg_restore -l to list the dump's table of contents. Generally it's easier to work with dumps by restoring them to a local PostgreSQL install (again, using pg_restore) though.
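For instance, the round trip might look like this (the database name mydb is an assumption):

```shell
# In the Cloud9 workspace: make a custom-format dump of the database.
pg_dump -Fc mydb > mydb.dump

# Anywhere pg_restore is installed: print the dump's table of contents
# (tables, indexes, etc.) without touching any database.
pg_restore -l mydb.dump
```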
For more detail, see the PostgreSQL manual.
I have an existing database on my server containing many tables with content. Now I have created a new database but some columns are added.
Is it possible to migrate all the data from the one database to the other?
Kind regards.
I've used the yaml_db gem to migrate DBs: https://github.com/ludicast/yaml_db - this gem adds some rake tasks that are helpful
After installing the gem, you can run rake db:data:dump to save your database to a .yml file.
Then, after changing your database configuration, you can run rake db:data:load to load the data into your new database.
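Assuming a standard Rails setup with the gem in your Gemfile, the whole round trip is roughly:

```shell
# Gemfile: gem 'yaml_db'
bundle install
rake db:data:dump              # saves the current database to db/data.yml
# ...edit config/database.yml to point at the new database...
rake db:create db:schema:load  # create the new database and its schema
rake db:data:load              # load db/data.yml into the new database
```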
I like your answer! But an easier way is to dump the whole database like you said, and just transfer it to another server.
Like this:
To Dump:
pg_dump -U demo02 -h localhost -O demo02 > demo2.sql
To Restore:
psql -U demo3 demo3 < demo2.sql
This is probably something simple I'm missing. I'm working through the Pragmatic Bookshelf Ruby on Rails exercises in the AWDWR 4th edition.
Everything was going well, and then I ran into the portion where you enter the sqlite3 command-line tool to make sure it's capturing the order information.
When I try to run the select statement for orders, I get:
sqlite> select * from orders;
SQL error: no such table: orders
Then I tried listing all the tables:
sqlite> .tables
sqlite>
I get to the sqlite command line per the instructions in the book:
sqlite3 -line
Is there something simple I'm missing here?
Thanks.
You need to specify a database filename on your sqlite3 command line. Usually*, if you do not give a database filename, then it will start out operating on an empty, temporary, in-memory database.
*
The version I have at hand (sqlite3 3.7.2) actually takes -line as the database filename if there are no additional arguments. This means that I end up with a file named -line; this file can be deleted with rm ./-line.
You probably want this (run from the root directory of your application):
sqlite3 -line db/development.sqlite3
If your project is using Rails 3, then you can use this:
rails db
If you need the -line behavior, you can use .mode line at the sqlite3 command line.
If you want to access the DB for a non-default environment, just append the environment name:
rails db staging
You can also add in -p if you want to automatically use the username and password from your configuration (sqlite3 does not need a username or password since it uses plain Unix permissions):
rails db -p production
To display all the tables in SQLite:
sqlite> SELECT * FROM sqlite_master;
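As a self-contained sketch using a throwaway database file (sqlite_master is the built-in catalog table, so filtering it by type gives just the table names):

```shell
# Create a scratch database with one table, then list its tables two ways.
sqlite3 scratch.db "CREATE TABLE orders(id INTEGER PRIMARY KEY);"
sqlite3 scratch.db "SELECT name FROM sqlite_master WHERE type = 'table';"
sqlite3 scratch.db ".tables"
rm scratch.db
```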
I had a similar problem (not getting anything back), but I'm using Windows, and it seems to have a problem when I include the drive in the path (C:\ or D:\). I was able to solve it by launching sqlite3 from the db's directory and using only the file name, like this:
C:\mydir\sqlite3.exe -line mydb.db
.tables
or
C:\mydir\sqlite3.exe
.open mydb.db
or
C:\mydir\sqlite3.exe
ATTACH "mydb.db" AS db1;
To display a table:
select * from mytable;
or
select * from db1.mytable;
Go into the db folder in a terminal and type
$ sqlite3 development.sqlite3
SQLite version 3.7.7 2011-06-25 16:35:41
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .table
and it will show the tables you have made.