Configuring postgresql in rails - ruby-on-rails

I am working on a project with a friend. I cloned the application from Bitbucket. Everything was fine except PostgreSQL (v9.3.7). It keeps giving me the following message:
psql: FATAL: password authentication failed for user "ubuntu"
I have created a superuser as well as all the databases. The designated users and the list of all the databases are given below.
ubuntu=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------+-----------
divjot | Superuser | {}
postgres | Superuser, Create role, Create DB, Replication | {}
ubuntu | Superuser, Create role, Create DB | {}
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------------+----------+-----------+---------+-------+-----------------------
app_development | ubuntu | SQL_ASCII | C | C |
app_production | ubuntu | SQL_ASCII | C | C |
app_test | ubuntu | SQL_ASCII | C | C |
postgres | postgres | SQL_ASCII | C | C |
template0 | postgres | SQL_ASCII | C | C | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | SQL_ASCII | C | C | =c/postgres +
| | | | | postgres=CTc/postgres
ubuntu | ubuntu | SQL_ASCII | C | C |
I have always struggled with PostgreSQL configuration in Rails. I follow the documentation completely; however, every time I try to clone the application or move the code, I run into problems. I am not sure why I am getting this error. Any help regarding this would be greatly appreciated. Thanks!

Disclaimer: I'm not an expert on pgsql. But I've successfully set up pg/rails in several versions and environments. Here's what I suggest.
Find the pg_hba.conf file. Since you are apparently using Ubuntu, try
cd /
find . -name pg_hba.conf -print 2> /dev/null
This will search your whole disk, which will take a while. But eventually it will provide the path. If it produces more than one, you'll have to resolve which one is correct.
If you know where PG is installed, cd there instead of root.
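Alternatively, if you can already connect as a superuser (for example via sudo -u postgres psql), you can ask the server directly where the file lives instead of searching the disk:
SHOW hba_file;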
Now
Verify the auth method for user ubuntu is password or maybe md5. Here is the relevant docs page. If you're interested only in local development, you can change password to trust. But if this is for production, that's a bad idea.
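For reference, the entries for a local password login usually look roughly like this (the exact lines in your pg_hba.conf will differ):
# TYPE  DATABASE  USER    ADDRESS        METHOD
local   all       ubuntu                 md5
host    all       ubuntu  127.0.0.1/32   md5
Remember to reload the server after editing the file (e.g. sudo service postgresql reload, or SELECT pg_reload_conf(); from psql), otherwise the change won't take effect.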
While logged into the pg command line, run
ALTER USER ubuntu PASSWORD 'newpassword';
to ensure the password is what you think it is.
You should post your config/database.yml or ENV['DATABASE_URL'] settings. In general, database.yml needs to match precisely what pg expects.
For example:
development:
  adapter: postgresql
  encoding: unicode
  database: app_development
  pool: 5
  username: ubuntu
  password: <your password>
  allow_concurrency: true
Caveat: Don't commit production passwords to your repos or even dev passwords if you don't totally control the repo.
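One common way to keep the password out of the repo is to have database.yml read it from the environment via ERB; the variable name used here (APP_DATABASE_PASSWORD) is just an example:
development:
  adapter: postgresql
  database: app_development
  username: ubuntu
  password: <%= ENV['APP_DATABASE_PASSWORD'] %>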

Related

Neo4J: "Database in use: false" after loading dump + migrate

I'm running a Neo4J 5.3.0 community edition and tried to load the dump from https://github.com/neo4j-graph-examples/twitch. I managed to load the dump and migrate the database.
My question: how do I use the "twitch" database?
cosh@osmingestor:~$ sudo neo4j-admin database info
Database name: neo4j
Database in use: true
Last committed transaction id:-1
Store needs recovery: true
Database name: system
Database in use: true
Last committed transaction id:-1
Store needs recovery: true
Database name: twitch
Database in use: false
Store format version: record-aligned-1.1
Store format introduced in: 5.0.0
Last committed transaction id:62742
Store needs recovery: false
show databases doesn't show it for my user "neo4j":
neo4j@neo4j> show databases;
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| name | type | aliases | access | address | role | writer | requestedStatus | currentStatus | statusMessage | default | home | constituents |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "neo4j" | "standard" | [] | "read-write" | "localhost:7687" | "primary" | TRUE | "online" | "online" | "" | TRUE | TRUE | [] |
| "system" | "system" | [] | "read-write" | "localhost:7687" | "primary" | TRUE | "online" | "online" | "" | FALSE | FALSE | [] |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I tried to grant access to the database by executing:
neo4j@neo4j> grant all database PRIVILEGES ON DATABASES * TO neo4j;
Unsupported administration command: grant all database PRIVILEGES ON DATABASES * TO neo4j
Adding "neo4j" as the default admin did not help as well (was working though):
neo4j-admin dbms set-default-admin neo4j

Ruby on Rails to Postgres Binary encoding

I have a Ruby on Rails + PostgreSQL app that saves image files into the db. In some Postgres installations the data is saved erroneously in what looks like a different bytea format, so on those installations the image file fails to be returned correctly.
WORKING SAVED DATA
\xFF\xD8\xFF\xE1\x00\x\xFF\xEE\x00\x0EAdobe\x00d\xC0\x00\x00\x00\x01\xFF\xDB\x00\x84\x00\x06\x04\x04\x04\x05\x04\x06\x05\x05\x06\t\x06\x05\x06\t\v\b\x06\x06\b\v\f\n\n\v\n\n\f\x10\f\f\f\f\f\f\x10\f\x0E\x0F\x10\x0F\x0E\f\x13\x13\x14\x14\x13\x13\x1C\e\e\e\x1C\x1F\x1F\x1F\x1F\x1F\x1F\x1F\x1F\x1F\x1F\x01\a\a\a\r\f\r\x18\x10\x10\x18\x1A\x15\x11\x15
NON WORKING SAVED DATA
x89504e470d0a1a0a0000000d494844520000014a0000003d0806000000cb7920c80000001974455874536f6674776172650041646f626520496d616765526561647971c9653c000047eb4944415478daec5d07781455d79e99edbdef269bb62940208426bd4a5110503e51c40e28d8051145fcec62ff4041c08628624169a28234419ad2a477d2934db2bdf736ffb95bc226a46c4240f1cf7d9e81ecececade7bef73de79e7b062749126b4b6da92db5a5b6d470
any suggestions??
RoR
On RoR I do this to save the file:
data = (!file.nil?) ? file.read.force_encoding("UTF-8") : nil
Brand.create!({name: brand[:name], value: brand[:value], file: file, file_properties: get_file_properties(file_url), default: brand[:default], data: data})
and to get the file image:
image_tag(o.get_base64_image, style: 'max-width:100%')
With the model method
def get_base64_image
  return self.data != nil ? 'data:image/png;base64,' + Base64.encode64(self.data) : ""
end
Server encoding for both servers is UTF-8
Postgres
postgresql.conf
bytea_output = 'escape'
Working Server
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
--------------------+------------+----------+-------------+-------------+---------------------------
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
Non-working server
List of databases
Name | Owner | Encoding | Collation | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | es_MX.UTF-8 | es_MX.UTF-8 |
The issue is related to the output format of the bytea column in PostgreSQL; I solved it like this:
ALTER ROLE postgres SET bytea_output TO 'escape';
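If you hit this, it may also help to confirm what each server is actually using, since bytea_output can be set per role or per database rather than only in postgresql.conf. A rough sketch (the database name is a placeholder):
SHOW bytea_output;
ALTER DATABASE your_db SET bytea_output TO 'escape';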

Removal of Role PostgreSQL Failed - cache lookup failed for database

This is my first time using PostgreSQL for production.
I made a database blog_production with username blog_production and a password generated by the capistrano-postgresql gem. Once it was generated, I tried to delete the database blog_production with this command from the terminal:
$ sudo -u postgres dropdb blog_production
After that I tried to delete user blog_production with this command:
$ sudo -u postgres dropuser blog_production
And it returned dropuser: removal of role "blog_production" failed: ERROR: cache lookup failed for database 16417
1.) Why is this happening?
2.) I also tried to delete from psql using DELETE FROM pg_roles WHERE rolname='blog_production' but it returned the same error (cache lookup failed)
3.) How do I solve this problem?
Thank you.
Additional Information
PostgreSQL Version
PostgreSQL 9.1.15 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
(1 row)
select * from pg_shdepend where dbid = 16417;
dbid | classid | objid | objsubid | refclassid | refobjid | deptype
-------+---------+-------+----------+------------+----------+---------
16417 | 1259 | 16419 | 0 | 1260 | 16418 | o
16417 | 1259 | 16426 | 0 | 1260 | 16418 | o
16417 | 1259 | 16428 | 0 | 1260 | 16418 | o
(3 rows)
select * from pg_database where oid = 16417;
datname | datdba | encoding | datcollate | datctype | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datacl
---------+--------+----------+------------+----------+---------------+--------------+--------------+---------------+--------------+---------------+--------
(0 rows)
select * from pg_authid where rolname = 'blog_production'
rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcatupdate | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil
-----------------+----------+------------+---------------+-------------+--------------+-------------+----------------+--------------+-------------------------------------+---------------
blog_production | f | t | f | f | f | t | f | -1 | md5d4d2f8789ab11ba2bd019bab8be627e6 |
(1 row)
Somehow the DROP database; didn't drop the shared dependencies correctly. PostgreSQL still thinks that your user owns three tables in the database you dropped.
Needless to say this should not happen; it's almost certainly a bug, though I don't know how we'd even begin tracking it down unless you know exactly what commands you ran etc to get to this point, right from creating the DB.
If the PostgreSQL install's data isn't very big and if you can share the contents, can I get you to stop the database server and make a tarball of the whole database directory, then send it to me? I'd like to see if I can tell what happened to get you to this point.
Send a Dropbox link to craig@2ndquadrant.com. Just:
sudo service postgresql stop
sudo tar cpjf ~abrahamks/abrahamks-postgres.tar.gz \
  /var/lib/postgresql/9.1/main \
  /etc/postgresql/9.1/main \
  /var/log/postgresql/postgresql-9.1-main-*. \
  /usr/lib/postgresql/9.1
sudo chown abrahamks ~abrahamks/abrahamks-postgres.tar.gz
and upload abrahamks-postgres.tar.gz from your home folder.
Replace abrahamks with your username on your system. You might need to adjust the paths above if I'm misremembering where the PostgreSQL data lives on Debian-derived systems.
Note that this contains all your databases not just the one that was an issue, and it also contains your PostgreSQL user accounts.
(If you're going to send me a copy, do so before continuing):
Anyway, since the database is dropped, it is safe to manually remove the dependencies that should've been removed by DROP DATABASE:
DELETE FROM pg_shdepend WHERE dbid = 16417;
It should then be possible to DROP USER blog_production;
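For example, reusing the OID from the question, the sequence would be roughly:
DELETE FROM pg_shdepend WHERE dbid = 16417;
SELECT count(*) FROM pg_shdepend WHERE dbid = 16417;  -- should now be 0
DROP USER blog_production;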

Collation ordering doesn't work properly with postgresql

I am developing a Rails app which has its content in Turkish. I am using PostgreSQL 9.2.2 as my database backend. Everything works fine (no weird character issues etc.) except for proper ordering.
For example, when I try to list some items ordered by the city they are in, I expect something like "Adana, Bursa, İstanbul, Giresun, Zonguldak, ...".
Instead, I always get the Turkish-specific characters at the end/beginning of the list (i.e. "Adana, Bursa, Giresun, Zonguldak, İstanbul").
I have initialized my postgres db with command: initdb /usr/local/var/postgres -E utf8 --locale=tr_TR
When I run \l in the psql console I get the expected output:
Name Owner Encoding Collate Ctype
----------------+-------------+----------+---------+-------+
app_development | app | UTF8 | tr_TR | tr_TR |
app_production | app | UTF8 | tr_TR | tr_TR |
app_test | app | UTF8 | tr_TR | tr_TR |
postgres | monkegjinni | UTF8 | tr_TR | tr_TR |
I also tried to create databases manually with LC_CTYPE="tr_TR.UTF-8" and LC_COLLATE="tr_TR.UTF-8", and again no progress.
Some information about my development environment:
Running Mountain Lion 10.8.2 with Macbook Pro 7.1
psql --version : 9.2.2
rails --version : 3.2.11
$ locale :
LANG="tr_TR.UTF-8"
LC_COLLATE="tr_TR.UTF-8"
LC_CTYPE="tr_TR.UTF-8"
LC_MESSAGES="tr_TR.UTF-8"
LC_MONETARY="tr_TR.UTF-8"
LC_NUMERIC="tr_TR.UTF-8"
LC_TIME="tr_TR.UTF-8"
LC_ALL=
How can I resolve this issue?
UTF-8 locales on OS X are broken. You can try to use a non-UTF-8 locale (tr_TR.ISO8859-9), but it appears that it is also broken in this respect. So it's not going to work.
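For what it's worth, on a Linux host where the tr_TR.UTF-8 locale behaves correctly, you could also apply the collation per query (supported since PostgreSQL 9.1) instead of re-initializing the cluster; the table and column names here are made up for illustration:
SELECT name FROM cities ORDER BY name COLLATE "tr_TR.utf8";
On OS X this still won't help, for the reason above.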

Rails mysql2 migration breaks utf8 strings

I'm migrating old mysql database to new one so I wrote a script that connects to old database, reads it and creates new entities in new database. Here it is:
old_db = Mysql2::Client.new(host: options[:db_host],
                            username: options[:db_user],
                            password: options[:db_password],
                            database: options[:db_name],
                            encoding: 'utf8')
old_categories = old_db.query('select id, title from catalog__category order by lvl asc')
old_categories.each do |old_c|
  c = Catalog::Category.new
  c.name = old_c["title"]
  c.save!
end
However, after the migration the category names appeared in really bad shape.
Both databases are encoded in utf8. Client and server are set to the utf8 encoding:
mysql> show variables like "%character%";
+--------------------------+------------------------------------------------------+
| Variable_name | Value |
+--------------------------+------------------------------------------------------+
| character_set_client | utf8 |
| character_set_connection | utf8 |
| character_set_database | utf8 |
| character_set_filesystem | binary |
| character_set_results | utf8 |
| character_set_server | utf8 |
| character_set_system | utf8 |
| character_sets_dir | /usr/local/Cellar/mysql/5.5.27/share/mysql/charsets/ |
The PHP project uses the old database and shows all strings correctly;
the Rails project uses the new database and shows everything correctly except the imported category strings.
Does anyone know where the problem is and how to fix it?
Thank you.
The root problem was bad encoding in the database itself (it was changed from latin1 to utf8 on the fly some time ago). As a result, the strings were encoded twice in the dump.
mysqldump --default-character-set=latin1 --skip-set-charset -u <user> -p <database> > b.sql
This command helps to generate a correct dump.
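A sketch of the follow-up step (the target database name is a placeholder): load that dump back with the client charset left at utf8, so the bytes are interpreted only once:
mysql --default-character-set=utf8 -u <user> -p <new_db> < b.sql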
