Can psql commands be executed from a Rails application? - ruby-on-rails

I want to know whether psql commands can be executed from an application.
For example, I want to make use of the \crosstabview functionality that psql provides. It is a great feature to have when viewing reports.
I have an application that uses Ruby on Rails. I'm wondering whether I can run \crosstabview from the application.

These are features of the psql client only. There is no way to use them via a database driver, be it ActiveRecord or anything else; that is simply a different layer.
However, you can get a table view using, for example, the table_view gem.

Can psql commands be executed from a Rails application?
It can theoretically be done by running psql from Ruby, but it's a really clunky solution.
The psql --command option only takes a single command string: either SQL that the database itself can parse, or a backslash command that needs no further input. That means you can run, for example, %x{ psql --command "\\h"}, but not \crosstabview, which needs a query to operate on.
That leaves using PTY to open an interactive session.
# example using PTY to connect to an interactive psql shell
require 'pty'
require 'expect'

PTY.spawn('psql') do |output, input, pid|
  output.expect(/=#/) do
    input.puts '\\conninfo'
    output.each do |line|
      puts line
    end
  end
end
While possible, there is a much better solution: use the crosstab function from the tablefunc extension, which can be called through ActiveRecord::Base.connection.execute.
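A minimal sketch of that approach, assuming a hypothetical sales(year, quarter, amount) table and that CREATE EXTENSION tablefunc has already been run:
# Hypothetical example: pivot quarterly sales per year with crosstab.
# Table and column names are assumptions, not from the original question.
sql = <<~SQL
  SELECT * FROM crosstab(
    $$ SELECT year, quarter, amount FROM sales ORDER BY 1, 2 $$,
    $$ SELECT q FROM generate_series(1, 4) AS q $$
  ) AS pivot(year int, q1 numeric, q2 numeric, q3 numeric, q4 numeric);
SQL

rows = ActiveRecord::Base.connection.execute(sql)
rows.each { |row| puts row.inspect }
Unlike \crosstabview, the pivoted result comes back as ordinary rows, so it can be rendered in a view or serialized like any other query result.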

I managed to do this with the \ir option provided by psql. With this option you can run the commands written in a file, so put all the commands that you want to run in the CLI into a single file.
Now you just have to connect to psql and pass the filename to \i (or \ir for a path relative to the current script).
Example: I created a file "a.rb" and it includes the following:
psql -h localhost -U postgres
\ir filename
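If you want to trigger that from the Rails application rather than typing it by hand, one hedged option (the file name, database, and connection details here are assumptions) is to shell out to psql with the -f flag, which runs a script file non-interactively:
# Hypothetical example: run a file of psql commands from a Ruby/Rails script.
# reports.psql is assumed to contain the \crosstabview (or other) meta-commands.
psql_script = Rails.root.join("db", "reports.psql").to_s

# -f runs the file and exits; the formatted output comes back as plain text
output = `psql -h localhost -U postgres -d my_database -f #{psql_script}`
puts output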

Related

I need to restore db from dump and cannot do it

I have a database dump at D:/backup.dump. I try to restore my database min_ro: I open psql.exe. The prompt shows
min_ro=#
Then I write the restore command:
min_ro=# psql min_ro < D:/backup.dump
Then nothing happens. My database is not restored. What is wrong? It's my first time using psql.
Update. I don't need psql specifically - I need to restore the db from the dump and cannot do it.
psql is not a SQL statement, so it doesn't make sense to enter that at the psql prompt, which is there to run SQL statements (or psql meta commands).
c:\> psql min_ro < D:/backup.dump
needs to be entered on the (Windows) command line, not inside psql.
You can however just run the SQL script (which I assume your dump is) by using the \i ("include") meta command in psql:
c:\> psql min_ro
min_ro=# \i D:/backup.dump
When you restore your database in pgAdmin III (by right-clicking the database name and then choosing 'Restore') you can't see .dump files in the backup file list by default. That was the mistake that forced me to try other ways to restore the DB from a dump.
But if you simply change the file type filter to 'All files' you can restore your database from the dump as usual.

neo4j script file format - is there any?

I would like to predefine some graph data for neo4j and be able to load it, maybe via a console tool. I'd like it to work exactly like the MySQL CLI and .sql files. Does anyone know if there is a file format like .neo or .neo4j? I couldn't find such a thing in the docs...
We usually use .cql or .cypher for script files. You can pipe one to the shell to run it, like so:
./neo4j-shell -c < MY_FILE.cypher
Michael Hunger has also been doing some great work on this feature recently. He got performance up and noise down in the console. I hope it gets into the 1.9 release.
From https://groups.google.com/forum/#!topic/opencypher/PO5EnspBLs0
1: "Sorry for the late reply, but we just wanted to inform you that the official recommendation is to use .cypher. We'll be formalising this in the style guide soon."
2: "In training run by Neo4j, we've historically used .cyp. I believe the preference is to use .cypher, and .cyp when an extension of 3 chars is required."
3: "Note: '.cql' is already used for Cassandra - https://cassandra.apache.org/doc/cql/CQL.html"
From the above extracts:
1st preference is .cypher
2nd preference is .cyp (the first 3 characters of cypher)
Don't use .cql
More:
If you need color coding in Notepad++, download the XML given at https://gist.github.com/nicolewhite/b0344ea475852c8c9571 , import it via Language > User Defined Language > Import, restart Notepad++, and open a .cypher file that contains some Cypher query language.
Sample Cypher is below:
MATCH (ann:Person {name: "Ann"})
CREATE (ann)-[:FB_FRIENDS]->(:Person {name: "Dan"})
Hope that helps someone.
Using neo4j-client as the CLI for Neo4j allows for easy evaluation of scripts. There are several ways to work with a script containing multiple cypher commands:
You can pipe the script in via standard input, e.g.:
neo4j-client -u neo4j -P localhost < my_script.cyp
You can use the command line option --source or -i, e.g.:
neo4j-client -u neo4j -P -i my_script.cyp localhost
You can start an interactive shell, and then source the script:
$ neo4j-client localhost
Username: neo4j
Password: *****
neo4j-client 1.2.1.
Enter `:help` for usage hints.
Connected to 'neo4j://neo4j@localhost:7687'
neo4j>
neo4j> :source my_script.cyp
The extension .cyp is most commonly used for scripts.

How to browse data in MongoDB in Mac OS?

When using PostgreSQL, I am accustomed to using the terminal for browsing the data stored in DB tables.
Is there any similar way to do it for MongoDB? I followed this topic for the MongoDB installation on Mac.
Thanks
The MongoDB bin directory contains an executable called 'mongo', which is an interactive shell (similar to 'psql' in PostgreSQL).
You can read more about how to use it in the MongoDB documentation.
To get started, you can type
> help
To switch to a specific database, just type:
> use db-name
^^^^^^^ replace with your db name.
> db.help()
> db.collectionName.help()
^^^^^^^^^^^^^^ replace with your collection name
You can do this from any machine, not just the one mongod is running on, but then you connect via:
mongo hostname:port/dbname
for example
mongo myMongoDBserver:27017/foobardb
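If you would rather browse the same data from Ruby instead of the mongo shell, a minimal sketch using the mongo gem (the host, database, and collection names here are assumptions) looks like this:
# Hypothetical example: list collections and peek at a few documents.
require 'mongo'

client = Mongo::Client.new(['myMongoDBserver:27017'], database: 'foobardb')
puts client.database.collection_names

client[:users].find.limit(5).each do |doc|
  puts doc.inspect
end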
First start the mongod process in a terminal tab. In another terminal tab or window, simply start mongo.
mongod is the Mongo daemon, which establishes connections and listens for requests. mongo is the JavaScript shell where you can run your interactive MongoDB queries.
The rest is best explained in the link @Asya Kamsky provided in their answer.
The 'mongo' command will open the mongo shell for you; there you can use database commands.

How to dump data from mysql database to postgresql database?

I have built the depot application using MySQL... Now I need to use Postgres... So I need to dump the data from the MySQL database "depot_development" into the Postgres database "depot_develop"...
You can find some interesting links here: http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
Have you tried to copy the tables from one database to the other?
a) export the data from MySQL as a delimited file, like:
$> mysql -e "SELECT * FROM table" -h HOST -u USER -pPWD -D DB > /file/path.csv
and then,
b) import it into Postgres like:
COPY table FROM '/file/path.csv' WITH CSV;
Note that mysql -e writes tab-separated output by default, so you may need to drop WITH CSV (COPY's default text format expects tab-delimited data) or export real CSV first.
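If you prefer to drive the import from the Rails side instead of psql, a hedged sketch using the pg driver's COPY FROM STDIN support (my_table and the file path are placeholders; streaming from the client avoids needing a server-side file readable by Postgres):
# Hypothetical example: stream the exported file into Postgres from Rails.
conn = ActiveRecord::Base.connection.raw_connection

# FORMAT text expects tab-separated lines, matching mysql -e's default output
conn.copy_data("COPY my_table FROM STDIN WITH (FORMAT text)") do
  File.foreach("/file/path.csv") { |line| conn.put_copy_data(line) }
end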
This question is a little old, but a few days ago I was dealing with this situation and found pgloader.io.
This is by far the easiest way of doing it. You need to install it, and then run a simple lisp script (script.lisp) with the following 3 lines:
/* content of script.lisp */
LOAD DATABASE
FROM mysql://dbuser@localhost/dbname
INTO postgresql://dbuser@localhost/dbname;
/* run this in the terminal */
pgloader script.lisp
And after that your PostgreSQL DB will have all of the information that you had in your MySQL DB.
On a side note, make sure you compile pgloader, since at the time of this post the installer has a bug (version 3.2.0).

Optimizing Rails loading for maintenance scripts

I wrote a script that does maintenance tasks for a Rails application. The script uses a class that uses models defined in the application. Just as an example, let's say the application defines a model User, and my class (used within the script) sends messages to it, like User.find id.
I am looking for ways to optimize this script, because right now it has to load the application environment: require '../config/environment'. This takes ~15 seconds.
Had the script not used the application codebase to do its job, I could have replaced the model abstractions with raw SQL. But unfortunately I can't do that, because I would have to repeat code in the script that is already present in the codebase. Not only would this violate the DRY principle and require a lot of work, the script would also not be very maintainable in case the model methods that I am using change.
I would like to hear ideas on how to approach this problem. The script is not run from the application itself, but from the shell (with Capistrano, for instance).
I hope I've described the problem clearly enough. Thank you.
Could you write a little daemon that blocks on a read from a pipe (or named fifo, or unix domain socket, or, with more complexity, a tcp port) and accepts 'commands' that it runs against your database?
#!/usr/bin/ruby
require '../config/environment'

while (true) do
  File.open("/tmp/fifo", "r") do |f|
    f.each_line do |line|
      case line
      when "cleanup" then puts "clean!"
      when "publish" then puts "published!"
      else puts "invalid command, ignoring"
      end
    end
  end
end
You could start this thing up with vixie cron's @reboot specifier, or you could run it via Capistrano commands, or run it out of init or init scripts. Then you write your Capistrano rules (the ones you have now) to simply echo commands into the fifo:
First,
mkfifo /tmp/fifo
In one terminal:
$ ./env.rb
In another terminal:
$ echo -n "cleanup" > /tmp/fifo
$ echo -n "publish" > /tmp/fifo
$ echo -n "go away" > /tmp/fifo
The output in the first terminal looks like this:
clean!
published!
invalid command, ignoring
You could make the matching as friendly (perhaps allowing a plain echo, rather than requiring echo -n as my example does) or as unfriendly as you want. And the commands that get run can of course call into your model files to do their work.
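On the sending side, the echo calls above could just as well come from a tiny Ruby helper (the script name and fifo path are assumptions) that never loads the Rails environment:
#!/usr/bin/ruby
# Hypothetical sender: writes one command into the fifo the daemon is reading.
# Usage: ./send_command.rb cleanup
command = ARGV[0] or abort "usage: #{$0} <command>"

# opening the fifo for writing blocks until the daemon has it open for reading
File.open("/tmp/fifo", "w") do |f|
  f.write(command)   # no trailing newline, matching the echo -n examples above
end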
Please make sure you choose a good location for your fifo -- /tmp/ is probably a bad place, as many distributions clear it on reboot. Also make sure you set the fifo owner and permission (chown and chmod) appropriately for your application -- you might not want to allow your Firefox's flash plugin to write to this file and command your database.
