Elasticsearch on Docker - Failed to create enrollment token when generating API key

I'm going through the Elasticsearch docs for setting up Elasticsearch/Kibana with Docker, but I'm getting several errors even though I'm following the steps exactly. I'm running this on an Ubuntu 20.04 EC2 instance. What am I doing wrong?
Here's what I did:
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.0.0
docker pull docker.elastic.co/kibana/kibana:8.0.0
docker network create elastic
docker run --name es01 --net elastic -p 9200:9200 -it docker.elastic.co/elasticsearch/elasticsearch:8.0.0
After step 4, Elasticsearch says:
A password is generated for the elastic user and output to the terminal, plus enrollment tokens for enrolling Kibana and adding additional nodes to your cluster.
I get neither. Instead, I get these error logs:
{"#timestamp":"2022-02-24T22:28:24.318Z", "log.level":"ERROR", "message":"Failed to create enrollment token when generating API key", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[a178a1e9d98b][generic][T#4]","log.logger":"org.elasticsearch.xpack.security.enrollment.InternalEnrollmentTokenGenerator","elasticsearch.cluster.uuid":"mC4ceJ2nT7i9-QF6V537Zg","elasticsearch.node.id":"IthIKocaSACunHatZxePPw","elasticsearch.node.name":"a178a1e9d98b","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.action.UnavailableShardsException","error.message":"[.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][fobYLX8BZdXU5J2_mb_p], source[{\"doc_type\":\"api_key\",\"creation_time\":1645741644265,\"expiration_time\":1645743444265,\"api_key_invalidated\":false,\"api_key_hash\":\"{PBKDF2}10000$PbPNTKm9i5HBuHO+W9snM/+0C1sf4OGjE3xC1m3xKew=$oQXD/UOSgR/hDNHz1IgNKoVOG4Zi0LkiPQW3IMPnRtA=\",\"role_descriptors\":{\"create_enrollment_token\":{\"cluster\":[\"cluster:admin/xpack/security/enroll/node\"],\"indices\":[],\"applications\":[],\"run_as\":[],\"metadata\":{},\"type\":\"role\"}},\"limited_by_role_descriptors\":{\"superuser\":{\"cluster\":[\"all\"],\"indices\":[{\"names\":[\"*\"],\"privileges\":[\"all\"],\"allow_restricted_indices\":false},{\"names\":[\"*\"],\"privileges\":[\"monitor\",\"read\",\"view_index_metadata\",\"read_cross_cluster\"],\"allow_restricted_indices\":true}],\"applications\":[{\"application\":\"*\",\"privileges\":[\"*\"],\"resources\":[\"*\"]}],\"run_as\":[\"*\"],\"metadata\":{\"_reserved\":true},\"type\":\"role\"}},\"name\":\"enrollment_token_API_key_fYbYLX8BZdXU5J2_mb_p\",\"version\":8000099,\"metadata_flattened\":null,\"creator\":{\"principal\":\"_xpack_security\",\"full_name\":null,\"email\":null,\"metadata\":{},\"realm\":\"__attach\",\"realm_type\":\"__attach\"}}]}] blocking until 
refresh]","error.stack_trace":"org.elasticsearch.action.UnavailableShardsException: [.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][fobYLX8BZdXU5J2_mb_p], source[{\"doc_type\":\"api_key\",\"creation_time\":1645741644265,\"expiration_time\":1645743444265,\"api_key_invalidated\":false,\"api_key_hash\":\"{PBKDF2}10000$PbPNTKm9i5HBuHO+W9snM/+0C1sf4OGjE3xC1m3xKew=$oQXD/UOSgR/hDNHz1IgNKoVOG4Zi0LkiPQW3IMPnRtA=\",\"role_descriptors\":{\"create_enrollment_token\":{\"cluster\":[\"cluster:admin/xpack/security/enroll/node\"],\"indices\":[],\"applications\":[],\"run_as\":[],\"metadata\":{},\"type\":\"role\"}},\"limited_by_role_descriptors\":{\"superuser\":{\"cluster\":[\"all\"],\"indices\":[{\"names\":[\"*\"],\"privileges\":[\"all\"],\"allow_restricted_indices\":false},{\"names\":[\"*\"],\"privileges\":[\"monitor\",\"read\",\"view_index_metadata\",\"read_cross_cluster\"],\"allow_restricted_indices\":true}],\"applications\":[{\"application\":\"*\",\"privileges\":[\"*\"],\"resources\":[\"*\"]}],\"run_as\":[\"*\"],\"metadata\":{\"_reserved\":true},\"type\":\"role\"}},\"name\":\"enrollment_token_API_key_fYbYLX8BZdXU5J2_mb_p\",\"version\":8000099,\"metadata_flattened\":null,\"creator\":{\"principal\":\"_xpack_security\",\"full_name\":null,\"email\":null,\"metadata\":{},\"realm\":\"__attach\",\"realm_type\":\"__attach\"}}]}] blocking until refresh]\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:1076)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:872)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:1031)\n\tat 
org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263)\n\tat org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:651)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:717)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
{"#timestamp":"2022-02-24T22:28:47.612Z", "log.level":"ERROR", "message":"error downloading geoip database [GeoLite2-ASN.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[a178a1e9d98b][generic][T#6]","log.logger":"org.elasticsearch.ingest.geoip.GeoIpDownloader","elasticsearch.cluster.uuid":"mC4ceJ2nT7i9-QF6V537Zg","elasticsearch.node.id":"IthIKocaSACunHatZxePPw","elasticsearch.node.name":"a178a1e9d98b","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.action.UnavailableShardsException","error.message":"[.geoip_databases][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.geoip_databases][0]] containing [index {[.geoip_databases][GeoLite2-ASN.mmdb_0_1645741637264], source[n/a, actual length: [1mb], max length: 2kb]}]]","error.stack_trace":"org.elasticsearch.action.UnavailableShardsException: [.geoip_databases][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.geoip_databases][0]] containing [index {[.geoip_databases][GeoLite2-ASN.mmdb_0_1645741637264], source[n/a, actual length: [1mb], max length: 2kb]}]]\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:1076)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:872)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:1031)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263)\n\tat 
org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:651)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:717)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
{"#timestamp":"2022-02-24T22:28:54.310Z", "log.level":"ERROR", "message":"Failed to generate credentials for the elastic built-in superuser", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[a178a1e9d98b][generic][T#7]","log.logger":"org.elasticsearch.xpack.security.InitialNodeSecurityAutoConfiguration","elasticsearch.cluster.uuid":"mC4ceJ2nT7i9-QF6V537Zg","elasticsearch.node.id":"IthIKocaSACunHatZxePPw","elasticsearch.node.name":"a178a1e9d98b","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.action.UnavailableShardsException","error.message":"[.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][reserved-user-elastic], source[{\"password\":\"ff1DWkSBw4Cju0b8U7PM\",\"enabled\":true,\"type\":\"reserved-user\"}]}] and a refresh]","error.stack_trace":"org.elasticsearch.action.UnavailableShardsException: [.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][reserved-user-elastic], source[{\"password\":\"ff1DWkSBw4Cju0b8U7PM\",\"enabled\":true,\"type\":\"reserved-user\"}]}] and a refresh]\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:1076)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:872)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:1031)\n\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345)\n\tat 
org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263)\n\tat org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:651)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:717)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
These are long messages. Here are just the error portions:
"Failed to create enrollment token when generating API key"
"error downloading geoip database [GeoLite2-ASN.mmdb]"
"Failed to generate credentials for the elastic built-in superuser"
"error downloading geoip database [GeoLite2-City.mmdb]"

I assume your problem is network-related, since the geoip database download failed and you're running inside Docker.
https://www.elastic.co/blog/docker-networking
When running Elasticsearch, you will need to ensure it publishes to an
IP address that is reachable from outside the container; this can be
configured via the setting network.publish_host.

I did not have enough storage space.
A gentleman on the Elasticsearch Slack channel was kind enough to point out that this was the real culprit:
"error.type": "org.elasticsearch.action.UnavailableShardsException", "error.message": "[.security-7][0] primary shard is not active Timeout: [1m],
I looked at my host system's available storage and found there was only 17G free! Elasticsearch stops allocating shards once disk usage crosses its disk watermarks, which is why the .security-7 primary shard never became active. Cleaning up my Trash bin fixed the issue. Hopefully this helps someone else!
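For anyone hitting the same symptom, a quick sanity check is to look at free disk space on the host and then ask Elasticsearch why the shard is unassigned. This is a minimal sketch; the ELASTIC_PASSWORD variable and the localhost:9200 endpoint are assumptions based on the question's single-node setup:

```shell
# Free space on the root filesystem (Docker's data dir usually lives here);
# Elasticsearch stops allocating shards once its disk watermarks are crossed.
df -h /

# If the node is up, the allocation-explain API reports why a shard such as
# .security-7 is unassigned; print a note instead if the node is unreachable.
curl -sk -u "elastic:${ELASTIC_PASSWORD:-changeme}" \
  "https://localhost:9200/_cluster/allocation/explain?pretty" \
  || echo "Elasticsearch is not reachable on localhost:9200"
```

When low disk is the cause, the allocation-explain output names the blocked shard and the disk-threshold decider in its explanation.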

Related

pgsync cannot connect to Amazon PostgreSQL database on RDS

I am trying to synchronize data from my local dev database to a test DB running on Amazon RDS using the pgsync gem.
My .pgsync.yml page is simple:
from: postgres://localhost:5432/imports_development?sslmode=require
to: [See attempts below]
exclude:
- [A few tables]
I have tried many approaches but none of them are working. Here are all approaches and the error messages I've received:
to: postgres://awsuser:mypassword@imports-test.abcdefg.us-east-1.rds.amazonaws.com/postgres?sslca=config/rds-combined-ca-bundle.pem
=> invalid URI query parameter: "sslca"
to: postgres://awsuser:mypassword@imports-test.abcdefg.us-east-1.rds.amazonaws.com
=> connection to server at "52.4.150.10", port 5432 failed: FATAL: database "awsuser" does not exist
to: $(heroku config:get DATABASE_URL)
=> invalid URI parameter: "sslca"
to: imports-test.abcdefg.us-east-1.rds.amazonaws.com
=> connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: database "imports-test.abcdefg.us-east-1.rds.amazonaws.com" does not exist
These are the credentials used by database.yml for the RDS database:
rds_db_name: postgres
rds_username: awsuser
rds_password: mypassword
rds_hostname: imports-test.abcdefg.us-east-1.rds.amazonaws.com
rds_port: 5432
I can connect to both databases with rails console, so it should just be a matter of getting the above statements right. What is missing here?
Have you tried postgres://awsuser:mypassword@imports-test.abcdefg.us-east-1.rds.amazonaws.com/postgres ? Attempt no. 2 seems to work but doesn't include the DB name, judging by the error.
Roughly the URI is in the format:
postgres://{USER}:{PASS}@{HOSTNAME}/{DBNAME}
The ?sslca=... part after the ? (the query string) holds connection options, and not all clients support every option. For pointing at a CA bundle, libpq expects sslrootcert rather than sslca.
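As a sketch, the URI can be assembled from the database.yml values in the question (the host and credentials are the question's placeholders; sslmode and sslrootcert are the TLS parameters libpq recognises, unlike sslca):

```shell
DB_USER=awsuser
DB_PASS=mypassword
DB_HOST=imports-test.abcdefg.us-east-1.rds.amazonaws.com
DB_NAME=postgres

# user:pass before the @, host after it, database name in the path,
# and TLS options in the query string.
URI="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}?sslmode=require"
echo "$URI"
```

To verify the server's CA as well, append &sslrootcert=config/rds-combined-ca-bundle.pem to the query string.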

Rails CQL cannot connect to AWS Keyspaces (AWS Cassandra)

I am trying to connect from a Ruby on Rails application to AWS Keyspaces (AWS Cassandra), but I cannot manage to do it. I use the cequel gem and generated the config/cequel.yml, which contains something similar to the following:
development:
  host: "CONTACT_POINT"
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  max_retries: 3
  retry_delay: 0.5
  newrelic: true
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'
  replication:
    class: NetworkTopologyStrategy
    datacenter1: 3
    datacenter2: 2
  durable_writes: false
(The credentials were used in another app, where they work as expected.)
When I try to run:
rake cequel:keyspace:create
I get the following errors:
Cassandra::Errors::NoHostsAvailable: All attempted hosts failed: x.xxx.xxx.xxx (Cassandra::Errors::ServerError: Internal Server Error)
Set the datacenter to us-east-1 and drop the replication definition.

Can we change the Elasticsearch server connection dynamically in a Rails app at run time

I have 2 Elasticsearch servers. In my Rails app, can I change the connection to the Elasticsearch servers at run time?
For example,
- If user 1 logs in to the app, it should connect to Elasticsearch server 1
- If user 2 logs in to the app, it should connect to Elasticsearch server 2
Thanks
You can use randomize_hosts when creating the connection:
args = {
  hosts: "https://host1.local:9091,https://host2.local:9091",
  adapter: :httpclient,
  logger: (Rails.env.development? || Rails.env.test?) ? Rails.logger : nil,
  reload_on_failure: false,
  randomize_hosts: true,
  request_timeout: 5
}
client = Elasticsearch::Client.new(args)
Randomize hosts doc
The docs also cover host-selection strategies other than round robin, and you can implement your own.

How to override or disable Postgrex timeout setting: 15 seconds?

Working on an Elixir app. There's a Scraper function that copies data from a Google Spreadsheet into a postgres database via the Postgrex driver. The connection through the Google API works fine, but the function always times out after 15 seconds.
01:48:36.654 [info] Running MyApp.Endpoint with Cowboy using http://localhost:80
Interactive Elixir (1.6.4) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> Scraper.update
542
iex(2)> 01:48:55.889 [error] Postgrex.Protocol (#PID<0.324.0>) disconnected: ** (DBConnection.ConnectionError) owner #PID<0.445.0> timed out because it owned the connection for longer than 15000ms
I have tried changing the 15_000 ms timeout setting everywhere in the source, but it seems the setting has been compiled into the binary. I am not an Erlang/Elixir developer, just helping a client install the app for demo purposes. My question is:
How can I recompile the Postgrex driver with the modified timeout setting?
Is there another way to override this setting, or disable the timeout altogether? I have tried find-and-replace on basically every instance of "15" in the source.
When issuing a query with Postgrex, the last argument can be a keyword list of options:
Postgrex.query!(pid, "AN SQL STATEMENT;", [], timeout: 50_000, pool_timeout: 40_000)
https://hexdocs.pm/postgrex/Postgrex.html#query/4
config :my_app, MyApp.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "my_app_dev",
  hostname: "localhost",
  timeout: 600_000,
  ownership_timeout: 600_000,
  pool_timeout: 600_000
Look at timeout and ownership_timeout. These values are set to 600 seconds, and probably not all of them are necessary.
Also, I once had to remove everything from _build and recompile the application before these values actually took effect.

Configuring Backup gem in Rails 5.2 - Performing backup of PostgreSQL database

I would like to perform a regular backup of a PostgreSQL database, my current intention is to use the Backup and Whenever gems. I am relatively new to Rails and Postgres, so there is every chance I am making a very simple mistake...
I am currently trying to set up the process on my development machine (Mac), but keep getting an error when trying to connect to the database.
In the terminal window, I have performed the following to check the details of my database and connection:
psql -d my_db_name
my_db_name=# \conninfo
You are connected to database "my_db_name" as user "my_MAC_username" via socket in "/tmp" at port "5432".
\q
I have also manually created a backup of the database:
pg_dump -U my_MAC_username -p 5432 my_db_name > name_of_backup_file
However, when I try to repeat this within db_backup.rb (created by the Backup gem) I get the following error:
[2018/10/03 19:59:00][error] Model::Error: Backup for Description for db_backup (db_backup) Failed!
--- Wrapped Exception ---
Database::PostgreSQL::Error: Dump Failed!
Pipeline STDERR Messages:
(Note: may be interleaved if multiple commands returned error messages)
pg_dump: [archiver (db)] connection to database "my_db_name" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/pg.sock/.s.PGSQL.5432"?
The following system errors were returned:
Errno::EPERM: Operation not permitted - 'pg_dump' returned exit code: 1
The contents of my db_backup.rb:
Model.new(:db_backup, 'Description for db_backup') do
  ##
  # PostgreSQL [Database]
  #
  database PostgreSQL do |db|
    # To dump all databases, set `db.name = :all` (or leave blank)
    db.name = "my_db_name"
    db.username = "my_MAC_username"
    #db.password = ""
    db.host = "localhost"
    db.port = 5432
    db.socket = "/tmp/pg.sock"
    # When dumping all databases, `skip_tables` and `only_tables` are ignored.
    # db.skip_tables = ["skip", "these", "tables"]
    # db.only_tables = ["only", "these", "tables"]
    # db.additional_options = ["-xc", "-E=utf8"]
  end
end
Could you please suggest what I need to do to resolve this issue and perform the same backup through db_backup.rb?
In case someone else gets stuck in a similar situation, the key to unlocking this problem was the lines:
psql -d my_db_name
my_db_name=# \conninfo
I realised that I needed to change db.socket = "/tmp/pg.sock" to db.socket = "/tmp", which seems to have resolved the issue.
However, I don't understand why the path on my computer differs from the default, as I didn't customise the installation of any gems or Postgres.app.
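A quick way to find the right db.socket value is to ask the running server for its socket directory. This is a sketch using the question's database name, guarded so it still prints something when psql or the server isn't available:

```shell
# Backup's db.socket expects the socket *directory* (e.g. "/tmp"),
# not the socket file itself (/tmp/.s.PGSQL.5432).
if command -v psql >/dev/null 2>&1; then
  psql -d my_db_name -At -c "SHOW unix_socket_directories;" 2>/dev/null \
    || echo "could not query the server"
else
  echo "psql not available"
fi
```

As for why the path differs: PostgreSQL's compiled-in default socket directory is /tmp (some Linux distributions patch it to /var/run/postgresql), which is why Postgres.app ends up there.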
