I'm used to creating the PDO object with something like this in the 4th parameter (driver options):
array(\PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES {$this->charset} COLLATE {$this->collation}")
How can I tell Symfony 2 to do this? In the configuration file I can only see a 'charset' option.
I also need to create all the tables with a specific collation: utf8_unicode_ci
What can I do to have all the tables created through the command line be created with that collation instead of latin1?
I have been facing the same problem. It seems that it has to do with the DBAL configuration. I found the following in the PDO documentation, under "PDO_MYSQL DSN - Connecting to MySQL databases":
charset Currently ignored.
My solution was to apply the collation manually at the database level. Then all tables created in Symfony2 with the schema update command will have the correct collation.
I also added this line to my Symfony2 doctrine: dbal configuration in config.yml:
charset: utf8
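For MySQL, applying the collation at the database level can be done with a statement like the following (the database name is a placeholder; note that already existing tables keep their old collation and would have to be altered separately):

```sql
ALTER DATABASE my_database
  CHARACTER SET utf8
  COLLATE utf8_unicode_ci;
```

After this, new tables created by the schema update command inherit utf8_unicode_ci as their default collation.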
Related
I would like to use an ICU collation in PostgreSQL with a Rails application.
I specified the ICU collation in database.yml like this:
adapter: postgresql
ctype: ja-x-icu
collation: ja-x-icu
but I got the following error:
Caused by:
PG::WrongObjectType: ERROR: invalid locale name: "ja-x-icu"
I found some questions saying that we cannot use an ICU collation in "CREATE DATABASE":
Get und-x-icu as collation and character type in Postgres 10 and win server 2008
Is this situation still the same as this question now?
If so, how can I create database with icu collation?
Thank you in advance.
(I'm using Rails 7 and Postgres 11 but I can move to further version if necessary.)
That is only supported with PostgreSQL v15 or better.
If you are using v15 or better, I guess your mistake is that you simply used
initdb --encoding=UTF8 --locale=ja-x-icu datadir
The files belonging to this database system will be owned by user "laurenz".
This user must also own the server process.
initdb: error: invalid locale name "ja-x-icu"
You have to do that differently:
initdb --encoding=UTF8 \
--locale-provider=icu --locale=ja_JP.utf8 --icu-locale=ja-x-icu datadir
Use --locale for the C library locale. Even if you are using an ICU collation (--icu-locale=ja-x-icu --locale-provider=icu), you have to specify a C library locale as well.
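On v15 or better you can also create an individual database with an ICU collation, instead of rebuilding the whole cluster; roughly like this (the database name is a placeholder):

```sql
CREATE DATABASE myapp_development
  TEMPLATE template0
  ENCODING 'UTF8'
  LOCALE_PROVIDER icu
  ICU_LOCALE 'ja-x-icu'
  LOCALE 'ja_JP.utf8';
```

TEMPLATE template0 is needed because a database with different locale settings cannot be cloned from template1.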
I'm following this doc: https://neo4j.com/developer/manage-multiple-databases
If we want to see the system information (view, create/delete, manage databases), we will need to switch to the system database.
We can do that with the :use command, telling it which database we want.
Command: :use system
Results:
But when I try it locally it says:
UnknownCommandError: Unknown command :use system
Am I doing it wrong?
As mentioned in the manual you have quoted:
Prerequisites
Please have Neo4j (version 4.0 or later) downloaded and
installed. It helps to have read the section on graph databases.
In Neo4j (v4.0+), we can create and use more than one active database
at the same time
And you seem to be using v3.5.1. Hence the issue.
Please upgrade to v4.0+ to be able to manage multiple databases.
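On v4.0+, the flow described in the manual looks roughly like this in Neo4j Browser (the database name here is made up):

```cypher
:use system
CREATE DATABASE mydb;
SHOW DATABASES;
:use mydb
```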
In Neo4j 3.5.x or below, you can change the neo4j.conf config property dbms.active_database to switch which database the DBMS will use; this may be easier for you than replacing file contents.
I'm new to neo4j.
I have created a new graph/database named db-learning. I'm able to connect and perform some operations on the database via neo4j browser. No issue at all.
However when I tried to dump it using neo4j-admin dump --database "db-learning" --to "/some/path" I get this error saying database not found.
Database does not exist: db-learning
Am I missing something?
Sorry if that's confusing. The database name in the project is not related to the underlying database name (which is neo4j for the default database).
So if you open the terminal, this should be good enough:
./bin/neo4j-admin dump --database "neo4j" --to "/tmp/test.dump"
I think you can also leave off the default database name.
Getting the same issue when neo4j is in fact an existing DB. I don't know how the N4J team managed to overcomplicate this so much but this whole process is such a nightmare.
The Aura service only accepts .dump files, which have to be generated via neo4j-admin. This won't work with remote DBs, so you have to pull down a neo4j directory (for instance, from a graphenedb.com dump), load it locally, then export that via neo4j-admin into a dump file in order to upload and import it into an Aura instance.
Has anyone at Neo4j actually used their own software?
I'm trying to seed my database with a collection exported via the mongoexport tool, but I can't seem to find any way to use the mongoimport tool through Ruby.
I looked at the Mongo driver docs for how to execute Mongo queries via Ruby, and thought about iterating through each line of JSON from the export, but there are keys like "$oid" which give errors when attempting a collection.insert()
Is it possible to use the mongoimport tool in Ruby, or what's the best way to add code to seeds.rb so that it imports a mongo collection?
The mongoimport tool is actually a command-line tool. So you don't use the Mongo Driver for this.
Instead you should "shell out" and call the process. Here's a link on calling a command from the shell.
Calling shell commands from Ruby
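For the seeds.rb case, the shell-out can be as simple as building the mongoimport command and running it with system. This is just a sketch; the database, collection, and file names are placeholders for your own:

```ruby
# Build the mongoimport invocation as an argument list
# (passing an array to system avoids shell-quoting issues).
db         = "myapp_development"    # placeholder database name
collection = "users"                # placeholder collection name
file       = "db/seeds/users.json"  # placeholder mongoexport output

cmd = ["mongoimport", "--db", db, "--collection", collection, "--file", file]
puts cmd.join(" ")

# In db/seeds.rb you would actually execute it:
# system(*cmd) or raise "mongoimport failed for #{file}"
```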
mongoexport exports documents in an extended json format specified in the MongoDB docs.
http://www.mongodb.org/display/DOCS/Mongo+Extended+JSON
The driver doesn't read this format automatically. For seeding a database, you may want to use mongodump and mongorestore, which use the database's native BSON format. As another poster mentioned, you could easily shell out to do this.
I have a Liquibase migration that I manually run to load seed data from several CSV files into my database. I would like to run this migration each time I run grails run-app.
I think I have two questions in one:

1. How do I integrate the migrate command into my grails run-app?
2. How do I clear the DATABASECHANGELOG to allow me to run the same migration over and over?

Or, is there a better way to load a lot of data into a DB from CSV files?
Question 1 - To integrate the migrate command into run-app, you should listen for the events thrown in the run-app scripts. This is explained here, and a more complete article is here.
Question 2 - For clearing the database, perhaps you can write a migration that clears the db for you? The way I do it is use a little script I wrote that just drops and creates a db. It's for MySQL:
target(dropdb: "The description of the script goes here!") {
    def x = 'mysql -u root --password=XXXX -e "drop database yourdb; create database yourdb default character set utf8; " '.execute()
    x.waitFor()
    println "Exit Value ${x.exitValue()}"
}
setDefaultTarget(dropdb)
Question #2: If you have particular changeSets you want to run every time, there is an "alwaysRun" attribute you can set on the changeSet tag.
For my money, it's easier to read the Liquibase Gant scripts and replicate what they do. They're simple and you'll have more insight into what's happening.
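For reference, a changeSet using that attribute might look like this (the table and file names are made up; note that in Liquibase 1.x the attribute is spelled alwaysRun, while 2.0+ renamed it to runAlways):

```xml
<changeSet id="load-seed-data" author="example" alwaysRun="true">
    <loadData tableName="country" file="seed/countries.csv"/>
</changeSet>
```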
You should use the autobase plugin. It will run your migrations when the application starts.
It also has a script to convert an XML changelog to a Groovy one, so you don't have to convert it manually.