pgsync cannot connect to Amazon PostgreSQL database on RDS - ruby-on-rails

I am trying to synchronize data from my local dev database to a test DB running on Amazon RDS using the pgsync gem.
My .pgsync.yml file is simple:
from: postgres://localhost:5432/imports_development?sslmode=require
to: [See attempts below]
exclude:
  - [A few tables]
I have tried several approaches, but none of them work. Here are the attempts and the error messages I received:
to: postgres://awsuser:mypassword@imports-test.abcdefg.us-east-1.rds.amazonaws.com/postgres?sslca=config/rds-combined-ca-bundle.pem
=> invalid URI query parameter: "sslca"
to: postgres://awsuser:mypassword@imports-test.abcdefg.us-east-1.rds.amazonaws.com
=> connection to server at "52.4.150.10", port 5432 failed: FATAL: database "awsuser" does not exist
to: $(heroku config:get DATABASE_URL)
=> invalid URI parameter: "sslca"
to: imports-test.abcdefg.us-east-1.rds.amazonaws.com
=> connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: database "imports-test.abcdefg.us-east-1.rds.amazonaws.com" does not exist
These are the credentials used by database.yml for the RDS database:
rds_db_name: postgres
rds_username: awsuser
rds_password: mypassword
rds_hostname: imports-test.abcdefg.us-east-1.rds.amazonaws.com
rds_port: 5432
I can connect to both databases with rails console, so it should just be a matter of getting the above statements right. What is missing here?

Have you tried postgres://awsuser:mypassword@imports-test.abcdefg.us-east-1.rds.amazonaws.com/postgres ? Attempt no. 2 seems to connect, but it doesn't include the DB name, so Postgres falls back to the user name ("awsuser") as the database name, which is exactly what the error shows.
Roughly, the URI has the format:
postgres://{USER}:{PASS}@{HOSTNAME}/{DBNAME}
Anything after the ? (the query string, e.g. ?sslca=123) is treated as connection options, and not all clients support every option.
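For what it's worth, libpq (which the pg gem, and therefore pgsync, uses under the hood) has no sslca parameter; the root-CA option is spelled sslrootcert. Assuming that is the only problem, a working to: line might look like this sketch (host, credentials, and bundle path are the question's own placeholders):
to: postgres://awsuser:mypassword@imports-test.abcdefg.us-east-1.rds.amazonaws.com:5432/postgres?sslmode=verify-full&sslrootcert=config/rds-combined-ca-bundle.pem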

Related

Can't connect Rails app to Postgres using PG gem

I'm trying to connect a Rails app to a remote Postgres database (using the PG gem) and running into issues.
I set up my database.yml file:
development:
  adapter: postgresql
  encoding: unicode
  database: testdb
  username: testuser
  password: "*******"
  host: **.***.***.***
  port: 5432
Then in my Rails controller I have this line to connect to the DB:
conn = PG.connect( :dbname => 'testdb' )
And I get this error in response
"status": "error",
"message": "could not connect to server: No such file or directory\n\tIs the server running locally and accepting\n\tconnections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?\n"
I was under the impression that PG would pull from the database connection loaded in Rails. If I inspect the DB configuration, I can see that it's pulling the database.yml parameters:
#<ActiveRecord::DatabaseConfigurations::HashConfig:0x00007fd5ae0562d0 @env_name="development", @name="primary", @configuration_hash={:adapter=>"postgresql", :encoding=>"unicode", :database=>"testdb", :username=>"testuser", :password=>"*****", :host=>"**.***.***.***", :port=>5432}>
I'm able to successfully connect though if I pass in the necessary variables or connection string to PG instead
conn = PG::Connection.new( "postgresql://testuser:*******@**.***.***.***:5432/testdb" )
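That is expected: PG.connect only uses the parameters you pass it (plus PG* environment variables and libpq defaults), and with no host it falls back to the local Unix socket, hence the /var/run/postgresql/.s.PGSQL.5432 error. A minimal sketch that reuses the Rails configuration instead of a hard-coded string, assuming Rails 6.1+ where connection_db_config is available:

# Build a PG connection from the config Rails already loaded from database.yml.
config = ActiveRecord::Base.connection_db_config.configuration_hash
conn = PG.connect(
  host:     config[:host],
  port:     config[:port],
  dbname:   config[:database],
  user:     config[:username],
  password: config[:password]
)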

Elasticsearch on Docker - Failed to create enrollment token when generating API key

I'm going through the Elasticsearch docs for setting up Elasticsearch/Kibana with Docker, but I'm getting several errors, even though I follow the steps exactly. I'm running this on an Ubuntu 20.04 EC2 instance. What am I doing wrong?
Here's what I did:
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.0.0
docker pull docker.elastic.co/kibana/kibana:8.0.0
docker network create elastic
docker run --name es01 --net elastic -p 9200:9200 -it docker.elastic.co/elasticsearch/elasticsearch:8.0.0
After step 4, Elasticsearch says:
A password is generated for the elastic user and output to the terminal, plus enrollment tokens for enrolling Kibana and adding additional nodes to your cluster.
I get neither. Instead, I get these error logs:
{"#timestamp":"2022-02-24T22:28:24.318Z", "log.level":"ERROR", "message":"Failed to create enrollment token when generating API key", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[a178a1e9d98b][generic][T#4]","log.logger":"org.elasticsearch.xpack.security.enrollment.InternalEnrollmentTokenGenerator","elasticsearch.cluster.uuid":"mC4ceJ2nT7i9-QF6V537Zg","elasticsearch.node.id":"IthIKocaSACunHatZxePPw","elasticsearch.node.name":"a178a1e9d98b","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.action.UnavailableShardsException","error.message":"[.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][fobYLX8BZdXU5J2_mb_p], source[{\"doc_type\":\"api_key\",\"creation_time\":1645741644265,\"expiration_time\":1645743444265,\"api_key_invalidated\":false,\"api_key_hash\":\"{PBKDF2}10000$PbPNTKm9i5HBuHO+W9snM/+0C1sf4OGjE3xC1m3xKew=$oQXD/UOSgR/hDNHz1IgNKoVOG4Zi0LkiPQW3IMPnRtA=\",\"role_descriptors\":{\"create_enrollment_token\":{\"cluster\":[\"cluster:admin/xpack/security/enroll/node\"],\"indices\":[],\"applications\":[],\"run_as\":[],\"metadata\":{},\"type\":\"role\"}},\"limited_by_role_descriptors\":{\"superuser\":{\"cluster\":[\"all\"],\"indices\":[{\"names\":[\"*\"],\"privileges\":[\"all\"],\"allow_restricted_indices\":false},{\"names\":[\"*\"],\"privileges\":[\"monitor\",\"read\",\"view_index_metadata\",\"read_cross_cluster\"],\"allow_restricted_indices\":true}],\"applications\":[{\"application\":\"*\",\"privileges\":[\"*\"],\"resources\":[\"*\"]}],\"run_as\":[\"*\"],\"metadata\":{\"_reserved\":true},\"type\":\"role\"}},\"name\":\"enrollment_token_API_key_fYbYLX8BZdXU5J2_mb_p\",\"version\":8000099,\"metadata_flattened\":null,\"creator\":{\"principal\":\"_xpack_security\",\"full_name\":null,\"email\":null,\"metadata\":{},\"realm\":\"__attach\",\"realm_type\":\"__attach\"}}]}] blocking until refresh]","error.stack_trace":"org.elasticsearch.action.UnavailableShardsException: [.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][fobYLX8BZdXU5J2_mb_p], source[{\"doc_type\":\"api_key\",\"creation_time\":1645741644265,\"expiration_time\":1645743444265,\"api_key_invalidated\":false,\"api_key_hash\":\"{PBKDF2}10000$PbPNTKm9i5HBuHO+W9snM/+0C1sf4OGjE3xC1m3xKew=$oQXD/UOSgR/hDNHz1IgNKoVOG4Zi0LkiPQW3IMPnRtA=\",\"role_descriptors\":{\"create_enrollment_token\":{\"cluster\":[\"cluster:admin/xpack/security/enroll/node\"],\"indices\":[],\"applications\":[],\"run_as\":[],\"metadata\":{},\"type\":\"role\"}},\"limited_by_role_descriptors\":{\"superuser\":{\"cluster\":[\"all\"],\"indices\":[{\"names\":[\"*\"],\"privileges\":[\"all\"],\"allow_restricted_indices\":false},{\"names\":[\"*\"],\"privileges\":[\"monitor\",\"read\",\"view_index_metadata\",\"read_cross_cluster\"],\"allow_restricted_indices\":true}],\"applications\":[{\"application\":\"*\",\"privileges\":[\"*\"],\"resources\":[\"*\"]}],\"run_as\":[\"*\"],\"metadata\":{\"_reserved\":true},\"type\":\"role\"}},\"name\":\"enrollment_token_API_key_fYbYLX8BZdXU5J2_mb_p\",\"version\":8000099,\"metadata_flattened\":null,\"creator\":{\"principal\":\"_xpack_security\",\"full_name\":null,\"email\":null,\"metadata\":{},\"realm\":\"__attach\",\"realm_type\":\"__attach\"}}]}] blocking until refresh]\n\tat 
org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:1076)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:872)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:1031)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263)\n\tat org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:651)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:717)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
{"#timestamp":"2022-02-24T22:28:47.612Z", "log.level":"ERROR", "message":"error downloading geoip database [GeoLite2-ASN.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[a178a1e9d98b][generic][T#6]","log.logger":"org.elasticsearch.ingest.geoip.GeoIpDownloader","elasticsearch.cluster.uuid":"mC4ceJ2nT7i9-QF6V537Zg","elasticsearch.node.id":"IthIKocaSACunHatZxePPw","elasticsearch.node.name":"a178a1e9d98b","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.action.UnavailableShardsException","error.message":"[.geoip_databases][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.geoip_databases][0]] containing [index {[.geoip_databases][GeoLite2-ASN.mmdb_0_1645741637264], source[n/a, actual length: [1mb], max length: 2kb]}]]","error.stack_trace":"org.elasticsearch.action.UnavailableShardsException: [.geoip_databases][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.geoip_databases][0]] containing [index {[.geoip_databases][GeoLite2-ASN.mmdb_0_1645741637264], source[n/a, actual length: [1mb], max length: 2kb]}]]\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:1076)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:872)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:1031)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263)\n\tat org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:651)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:717)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
{"#timestamp":"2022-02-24T22:28:54.310Z", "log.level":"ERROR", "message":"Failed to generate credentials for the elastic built-in superuser", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[a178a1e9d98b][generic][T#7]","log.logger":"org.elasticsearch.xpack.security.InitialNodeSecurityAutoConfiguration","elasticsearch.cluster.uuid":"mC4ceJ2nT7i9-QF6V537Zg","elasticsearch.node.id":"IthIKocaSACunHatZxePPw","elasticsearch.node.name":"a178a1e9d98b","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.action.UnavailableShardsException","error.message":"[.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][reserved-user-elastic], source[{\"password\":\"ff1DWkSBw4Cju0b8U7PM\",\"enabled\":true,\"type\":\"reserved-user\"}]}] and a refresh]","error.stack_trace":"org.elasticsearch.action.UnavailableShardsException: [.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][reserved-user-elastic], source[{\"password\":\"ff1DWkSBw4Cju0b8U7PM\",\"enabled\":true,\"type\":\"reserved-user\"}]}] and a refresh]\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:1076)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:872)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:1031)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263)\n\tat org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:651)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:717)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
These are long messages. Here are the error portions of the above:
"Failed to create enrollment token when generating API key"
"error downloading geoip database [GeoLite2-ASN.mmdb]"
"Failed to generate credentials for the elastic built-in superuser"
"error downloading geoip database [GeoLite2-City.mmdb]"
I assume your problem is network-related, since the geoip database download failed and you are running inside Docker.
https://www.elastic.co/blog/docker-networking
When running Elasticsearch, you will need to ensure it publishes to an
IP address that is reachable from outside the container; this can be
configured via the setting network.publish_host.
I did not have enough storage space.
A gentleman on the Elasticsearch Slack channel was kind enough to point out that this was the real culprit:
"error.type": "org.elasticsearch.action.UnavailableShardsException", "error.message": "[.security-7][0] primary shard is not active Timeout: [1m],
I looked at my available host system storage space and found there was only 17G available! Cleaning up my Trash bin fixed the issue. Works now. Hopefully this helps someone else!
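For context: Elasticsearch stops allocating shards once disk usage on the data path crosses its high watermark (90% by default), and a brand-new node that cannot allocate the .security-7 primary shard fails in exactly the way shown above. A quick way to check before cleaning things up (paths are illustrative):

df -h /            # overall disk usage on the Docker host
docker system df   # space consumed by images, containers, and volumes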

Rails CQL cannot connect to AWS Keyspaces (AWS Cassandra)

I am trying to connect from a Ruby on Rails application to AWS Keyspaces (AWS Cassandra), but I cannot manage to do it. I use the cequel gem and generated config/cequel.yml, which contains something similar to the following:
development:
  host: "CONTACT_POINT"
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  max_retries: 3
  retry_delay: 0.5
  newrelic: true
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'
  replication:
    class: NetworkTopologyStrategy
    datacenter1: 3
    datacenter2: 2
  durable_writes: false
(The credentials were used in another app, where they work as expected.)
When I try to run:
rake cequel:keyspace:create
I get the following errors:
Cassandra::Errors::NoHostsAvailable: All attempted hosts failed: x.xxx.xxx.xxx (Cassandra::Errors::ServerError: Internal Server Error)
Set the datacenter to us-east-1 and drop the replication definition; AWS Keyspaces manages replication itself.
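A sketch of what the config might look like after that change; the datacenter key is an assumption about what your cequel/driver version accepts, so check its docs:

development:
  host: "CONTACT_POINT"
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'
  datacenter: "us-east-1"  # assumed key name, passed through to the driver's load-balancing policy
  # replication: removed; Keyspaces handles replication for you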

Configuring Backup gem in Rails 5.2 - Performing backup of PostgreSQL database

I would like to perform a regular backup of a PostgreSQL database; my current intention is to use the Backup and Whenever gems. I am relatively new to Rails and Postgres, so there is every chance I am making a very simple mistake...
I am currently trying to set up the process on my development machine (a Mac), but keep getting an error when trying to connect to the database.
In the terminal window, I have performed the following to check the details of my database and connection:
psql -d my_db_name
my_db_name=# \conninfo
You are connected to database "my_db_name" as user "my_MAC_username" via socket in "/tmp" at port "5432".
\q
I have also manually created a backup of the database:
pg_dump -U my_MAC_username -p 5432 my_db_name > name_of_backup_file
However, when I try to repeat this within db_backup.rb (created by the Backup gem) I get the following error:
[2018/10/03 19:59:00][error] Model::Error: Backup for Description for db_backup (db_backup) Failed!
--- Wrapped Exception ---
Database::PostgreSQL::Error: Dump Failed!
Pipeline STDERR Messages:
(Note: may be interleaved if multiple commands returned error messages)
pg_dump: [archiver (db)] connection to database "my_db_name" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/pg.sock/.s.PGSQL.5432"?
The following system errors were returned:
Errno::EPERM: Operation not permitted - 'pg_dump' returned exit code: 1
The contents of my db_backup.rb:
Model.new(:db_backup, 'Description for db_backup') do
  ##
  # PostgreSQL [Database]
  #
  database PostgreSQL do |db|
    # To dump all databases, set `db.name = :all` (or leave blank)
    db.name = "my_db_name"
    db.username = "my_MAC_username"
    #db.password = ""
    db.host = "localhost"
    db.port = 5432
    db.socket = "/tmp/pg.sock"
    # When dumping all databases, `skip_tables` and `only_tables` are ignored.
    # db.skip_tables = ["skip", "these", "tables"]
    # db.only_tables = ["only", "these", "tables"]
    # db.additional_options = ["-xc", "-E=utf8"]
  end
end
Could you please suggest what I need to do to resolve this issue and perform the same backup through the db_backup.rb code?
In case someone else gets stuck in a similar situation, the key to unlocking this problem was the lines:
psql -d my_db_name
my_db_name=# \conninfo
I realised that I needed to change db.socket = "/tmp/pg.sock" to db.socket = "/tmp", which seems to have resolved the issue.
However, I don't understand why the path on my computer differs from the default, as I didn't do anything to customise the installation of any gems or the Postgres app.
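The likely explanation: db.socket expects the socket directory, not a socket file; libpq appends .s.PGSQL.<port> itself, which is why "/tmp/pg.sock" produced the bogus path "/tmp/pg.sock/.s.PGSQL.5432". Postgres.app and Homebrew builds commonly put the socket in /tmp. You can confirm what your server uses with a quick query (Postgres 9.3+):

psql -d my_db_name -c 'SHOW unix_socket_directories;'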

Mongoid only working in development. Throws errors in both test and production

When I start my rails console as:
$ RAILS_ENV=development rails console
every thing seems to be working fine.
Mongoid is able to connect to mongodb and fetch records.
But with:
$ RAILS_ENV=test rails console
$ RAILS_ENV=production rails console
it throws errors like:
Rack::File headers parameter replaces cache_control after Rack 1.5.
/usr/local/lib/ruby/gems/1.9.1/gems/mongoid-3.0.16/lib/mongoid/criteria.rb:585:in `check_for_missing_documents!': (Mongoid::Errors::DocumentNotFound)
Problem:
Document(s) not found for class Actor with id(s) 50e5259f53c205d815000001.
Summary:
When calling Actor.find with an id or array of ids, each parameter must match a document in the database or this error will be raised. The search was for the id(s): 50e5259f53c205d815000001 ... (1 total) and the following ids were not found: 50e5259f53c205d815000001.
Resolution:
Search for an id that is in the database or set the Mongoid.raise_not_found_error configuration option to false, which will cause a nil to be returned instead of raising this error when searching for a single id, or only the matched documents when searching for multiples.
My config/mongoid.yml has the exact same set of lines for all three environments.
I'm not able to figure out why it isn't able to connect in test and production.
Update:
Mongoid.yml
development:
  sessions:
    default:
      database: tgmd
      hosts:
        - localhost:27017
test:
  sessions:
    default:
      database: tgmd
      hosts:
        - localhost:27017
production:
  sessions:
    default:
      uri: <%= ENV['MONGO_URL'] %>
I temporarily solved the issue by placing:
options:
  raise_not_found_error: false
under production:. I also moved a few scripts out of the jobs/ folder. It worked after that.
Can anyone enlighten me on this?
I think your ENV['MONGO_URL'] contains something incorrect.
Try using your development database in production:
production:
  sessions:
    default:
      database: tgmd
      hosts:
        - localhost:27017
If that works, check your ENV['MONGO_URL'].
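A quick way to see what the app actually receives (a sketch; it assumes MONGO_URL is exported in the same shell that runs Rails):

RAILS_ENV=production rails runner 'puts ENV["MONGO_URL"].inspect'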
