Importing a MongoDB Collection through Ruby (RoR) - ruby-on-rails

I'm trying to seed my database with a collection exported via the mongoexport tool, but I can't seem to find any way to use the mongoimport tool through Ruby.
I looked at the Mongo driver docs for how to execute queries via Ruby, and thought about iterating over each line of JSON from the export, but keys like "$oid" cause errors when passed to collection.insert()
Is it possible to use the mongoimport tool in Ruby, or what's the best way to add code to seeds.rb so that it imports a mongo collection?

mongoimport is actually a command-line tool, so you don't use the Mongo driver for this.
Instead, you should "shell out" and call the process. Here's a link on calling a command from the shell:
Calling shell commands from Ruby
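In case that link goes stale, here is a minimal sketch of the common ways to shell out from Ruby (the `echo` command is just a placeholder for mongoimport or any other tool):

```ruby
# Backticks run a command and capture its stdout as a string.
output = `echo hello`

# system prints the command's output to the console and returns
# true/false depending on the exit status.
ok = system("echo", "hello")

# $? holds the status of the last spawned child process.
status = $?.exitstatus
```

Backticks are handy when you need the output; `system` is usually enough when you only care whether the command succeeded.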

mongoexport exports documents in an extended json format specified in the MongoDB docs.
http://www.mongodb.org/display/DOCS/Mongo+Extended+JSON
The driver doesn't read this format automatically. For seeding a database, you may want to use mongodump and mongorestore, which use the database's native BSON format. As another poster mentioned, you could easily shell out to do this.
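Following that suggestion, seeds.rb could shell out to mongorestore along these lines (the database name and dump path are illustrative, and this assumes mongorestore is on the PATH):

```ruby
# db/seeds.rb -- restore a BSON dump produced by mongodump.
# "myapp_development" and the dump path are placeholders.
db_name  = "myapp_development"
dump_dir = "db/dump/#{db_name}"

# --drop replaces existing collections with the dump's contents.
cmd = ["mongorestore", "--db", db_name, "--drop", dump_dir]

# Only attempt the restore when the dump directory is present.
if Dir.exist?(dump_dir)
  system(*cmd) or abort "mongorestore failed"
end
```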

Related

Neo4j: Unknown command :use

I'm following this doc: https://neo4j.com/developer/manage-multiple-databases
If we want to see the system information (view, create/delete, manage databases), we will need to switch to the system database.
We can do that with the :use command, telling it which database we want.
Command: :use system
But when I try it locally it says:
UnknownCommandError: Unknown command :use system
Am I doing it wrong?
As mentioned in the manual you have quoted:
Prerequisites
Please have Neo4j (version 4.0 or later) downloaded and
installed. It helps to have read the section on graph databases.
In Neo4j (v4.0+), we can create and use more than one active database
at the same time
And you seem to be using v3.5.1. Hence the issue.
Please upgrade to v4.0+ to be able to manage multiple databases.
In Neo4j 3.5.x or below, you can change the neo4j.conf config property dbms.active_database to switch which database the DBMS will use; this may be easier for you than replacing file contents.
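For example, in neo4j.conf (Neo4j 3.5.x; the database name here is illustrative):

```
# neo4j.conf -- select the single active database (3.5.x only;
# this setting was removed in 4.0 in favour of multiple databases)
dbms.active_database=graph.db
```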

Neo4j - Unable to run multiple cypher files with Neo4J Docker 4.2.5

I am using Neo4j 4.2.5 with Docker and a Cypher file to initialize data.
I am setting the following environment property to initialize data with docker-compose:
NEO4J_apoc_initializer_cypher=CALL apoc.cypher.runFile ("file:///sample.cypher")
This works fine and loads the data after the server starts.
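For reference, the working single-file setup can be expressed in docker-compose like this (service name, image tag, and volume paths are illustrative):

```yaml
services:
  neo4j:
    image: neo4j:4.2.5
    environment:
      - NEO4J_apoc_initializer_cypher=CALL apoc.cypher.runFile("file:///sample.cypher")
    volumes:
      # sample.cypher must be visible inside the container's import directory
      - ./import:/var/lib/neo4j/import
```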
However, you cannot create indexes in this file; you need a runSchemaFile call for that.
As per the docs for Neo4j 4.2, you can run multiple Cypher files:
NEO4J_apoc_initializer_cypher_0=CALL apoc.cypher.runSchemaFile ("file:///schema.cypher")
NEO4J_apoc_initializer_cypher_1=CALL apoc.cypher.runFile ("file:///sample.cypher")
However, this throws an error:
unknown settings: apoc.initilizer.cypher.0
unknown settings: apoc.initilizer.cypher.1
Can someone please help me with this?
This has been resolved. It turned out the issue was incorrect documentation on how to run multiple Cypher files. Please check this link for more details.

neo4j dump error: database does not exist

I'm new to neo4j.
I have created a new graph/database named db-learning. I'm able to connect and perform some operations on the database via neo4j browser. No issue at all.
However, when I try to dump it using neo4j-admin dump --database "db-learning" --to "/some/path", I get an error saying the database was not found:
Database does not exist: db-learning
Am I missing something?
Sorry if that's confusing. The database name in the project is not related to the underlying database name (which is neo4j for the default database).
So if you open the terminal, this should be good enough:
./bin/neo4j-admin dump --database "neo4j" --to "/tmp/test.dump"
I think you can also leave off the default database name.
Getting the same issue when neo4j is in fact an existing DB. I don't know how the N4J team managed to overcomplicate this so much but this whole process is such a nightmare.
The Aura service only accepts .dump files, which have to be generated via neo4j-admin. This won't work with remote DBs, so you have to pull down a neo4j directory (for instance, from a graphenedb.com dump), load it locally, then export that via neo4j-admin -> dump file in order to upload and import into an Aura instance.
Has anyone at Neo4j actually used their own software?

After downloading a Symfony application, which command(s) do I have to execute in order to configure the database?

Let's say I want to download a complete Symfony app, for instance, Jobeet.
I'll have everything necessary to run the app on my desktop, but it wouldn't really work with an empty database. Is there a terminal command to create and fill the database with everything the app requires?
First, configure your database, either by command line, or editing the "/config/databases.yml" file.
> php symfony configure:database "mysql:host=YOURHOST;dbname=YOURDBNAME" YOURDBUSER YOURDBPASS
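The task above writes config/databases.yml for you; a minimal hand-edited version might look like this (the host, database name, and credentials are the same placeholders as in the command):

```yaml
# config/databases.yml (symfony 1.x with Doctrine)
all:
  doctrine:
    class: sfDoctrineDatabase
    param:
      dsn:      mysql:host=YOURHOST;dbname=YOURDBNAME
      username: YOURDBUSER
      password: YOURDBPASS
```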
Next, if you want to generate everything, forms, filters, models and data, run the following command:
For Doctrine ORM:
php symfony doctrine:build --all --and-load
For Propel ORM:
php symfony propel:build --all --and-load
This should get you up and running. You should definitely look at the tutorial for Jobeet posted on the Symfony Project website for more information on how this project works:
Doctrine: http://www.symfony-project.org/jobeet/1_4/Doctrine/en/
Propel: http://www.symfony-project.org/jobeet/1_4/Propel/en/
You can either edit config/databases.yml file or use configure:database task. For more info run:
./symfony help configure:database

How to integrate a Liquibase migration into my grails build?

I have a Liquibase migration that I manually run to load seed data from several CSV files into my database. I would like to run this migration each time I run grails run-app.
I think I have two questions in one:

1. How do I integrate the migrate command into my grails run-app?
2. How do I clear the DATABASECHANGELOG to allow me to run the same migration over and over?
Or, is there a better way to load a lot of data into a DB from CSV files?
Question 1 - To integrate migrate command into run-app, you should listen for events thrown in run-app scripts. This is explained here, and a more complete article is here.
Question 2 - For clearing the database, perhaps you can write a migration that clears the db for you? The way I do it is to use a little script I wrote that just drops and recreates the db. It's for MySQL:
target(dropdb: "The description of the script goes here!") {
    def x = 'mysql -u root --password=XXXX -e "drop database yourdb; create database yourdb default character set utf8;"'.execute()
    x.waitFor()
    println "Exit Value ${x.exitValue()}"
}
setDefaultTarget(dropdb)
Question #2: If you have particular changeSets you want to run every time, there is an "alwaysRun" attribute you can set on the changeSet tag.
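A sketch of such a changeSet (the attribute name alwaysRun is from Liquibase 1.x, as used with Grails at the time; the table name and CSV path are illustrative):

```xml
<!-- Runs on every update; "country" and the CSV path are placeholders. -->
<changeSet id="load-seed-data" author="example" alwaysRun="true">
    <loadData tableName="country" file="seed/countries.csv"/>
</changeSet>
```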
For my money, it's easier to read the Liquibase Gant scripts and replicate what they do. They're simple and you'll have more insight into what's happening.
You should use the Autobase plugin. It will run your migrations when the application starts.
It also has a script to convert an XML changelog to a Groovy one, so you don't have to convert it manually.
