ksqlDB server shuts down when config, offset and status topics are changed

I'm running a single ksqlDB server with embedded Kafka Connect on our Kubernetes cluster, and I want to add a connector.
Adding a connector produces a "Request timed out" error from Kafka Connect, exactly like the one described in this blog post by Robin Moffatt.
He suggests changing the KAFKA_OFFSET_REPLICATION_FACTOR used in his docker-compose example.
Unfortunately, in our test environment I don't have easy access to the existing Kafka cluster (we have admins for that), so I think the fastest way forward is to instead change the following (a rough sketch of the resulting settings is shown right after this list):
KSQL_CONNECT_CONFIG_STORAGE_TOPIC - changed to a different topic name
KSQL_CONNECT_OFFSET_STORAGE_TOPIC - changed to a different topic name
KSQL_CONNECT_STATUS_STORAGE_TOPIC - changed to a different topic name
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR - changed to -1 (originally 1)
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR - changed to -1 (originally 1)
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR - changed to -1 (originally 1)
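Roughly, the relevant settings then look like this (the topic names here are just examples, not the real ones we use):
KSQL_CONNECT_CONFIG_STORAGE_TOPIC=ksql-connect-configs-v2
KSQL_CONNECT_OFFSET_STORAGE_TOPIC=ksql-connect-offsets-v2
KSQL_CONNECT_STATUS_STORAGE_TOPIC=ksql-connect-statuses-v2
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=-1
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=-1
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=-1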
But when I change the topic names, I can see that the new topics are created (via ksqlDB's SHOW TOPICS command), yet the server shuts down and restarts endlessly. Here are the logs:
[2021-07-22 01:27:19,889] INFO ProcessingLogConfig values:
ksql.logging.processing.rows.include = false
ksql.logging.processing.stream.auto.create = false
ksql.logging.processing.stream.name = KSQL_PROCESSING_LOG
ksql.logging.processing.topic.auto.create = false
ksql.logging.processing.topic.name =
ksql.logging.processing.topic.partitions = 1
ksql.logging.processing.topic.replication.factor = 1
(io.confluent.ksql.logging.processing.ProcessingLogConfig:372)
[2021-07-22 01:27:19,891] ERROR Aborting application start (io.confluent.ksql.rest.server.KsqlRestApplication:378)
io.confluent.ksql.rest.server.KsqlRestApplication$AbortApplicationStartException: Shutting down application during waitForPreconditions
at io.confluent.ksql.rest.server.KsqlRestApplication.waitForPreconditions(KsqlRestApplication.java:441)
at io.confluent.ksql.rest.server.KsqlRestApplication.startKsql(KsqlRestApplication.java:386)
at io.confluent.ksql.rest.server.KsqlRestApplication.startAsync(KsqlRestApplication.java:370)
at io.confluent.ksql.rest.server.MultiExecutable.doAction(MultiExecutable.java:68)
at io.confluent.ksql.rest.server.MultiExecutable.startAsync(MultiExecutable.java:42)
at io.confluent.ksql.rest.server.KsqlServerMain.tryStartApp(KsqlServerMain.java:89)
at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:64)
[2021-07-22 01:27:19,892] INFO Server up and running (io.confluent.ksql.rest.server.KsqlServerMain:90)
[2021-07-22 01:27:19,892] INFO Server shutting down (io.confluent.ksql.rest.server.KsqlServerMain:96)
[2021-07-22 01:27:19,892] INFO ksqlDB shutdown called (io.confluent.ksql.rest.server.KsqlRestApplication:498)
[2021-07-22 01:27:34,926] INFO API server stopped (io.confluent.ksql.api.server.Server:196)
[2021-07-22 01:27:34,927] INFO ksqlDB shutdown complete (io.confluent.ksql.rest.server.KsqlRestApplication:553)
I don't have any more details; that's all there is.
When I revert the config, offset and status topic names to their original values, the ksqlDB server starts fine, but then I'm back to the problem of not being able to create connectors.
I'm also worried that if I try to delete the topics manually, the ksqlDB server won't be able to start properly because it still references the original config, offset and status topics.

I have solved the issue. Apparently, using -1 as the value for:
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR
doesn't work properly: the Connect config topic ends up with 20 partitions, whereas the Confluent docs say it should have exactly 1 partition. I think that's why the ksqlDB server restarts endlessly; I still need to gather the right evidence.
Setting those values to 3 (our Kafka brokers' default replication factor) solved the issue. It was hard to diagnose, because no error message is logged when the created config topic has more than 1 partition.
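If anyone wants to double-check this on their own cluster, the partition count of the Connect config topic can be inspected with the standard Kafka CLI (the broker address and topic name below are placeholders):
kafka-topics --bootstrap-server broker:9092 --describe --topic <your-connect-config-topic>
# the Confluent docs say the config storage topic must have exactly 1 partition;
# seeing e.g. 20 partitions (a broker default) is the symptom I hit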

Related

Getting tons of DecisionTaskTimedOut after scaling out the matching service of Uber Cadence in a Docker Swarm cluster

I'm trying to run each Cadence service independently so that I can scale them in and out easily.
My team is using Docker Swarm, and we're managing everything with a Portainer UI.
So far, I've been able to scale the frontend service to two replicas, but if I do the same with the matching service, I get a lot of DecisionTaskTimedOut events during workflow execution. The execution eventually finishes successfully, but only after a long time: around 2 minutes with two matching service replicas versus only 7 seconds with just one.
This is a test environment. I'm using a dockerized Cassandra DB (we cannot use a dedicated cluster due to budget restrictions). Maybe that's the problem? The Docker image is configured with the following environment variables:
RINGPOP_BOOTSTRAP_MODE=dns
KEYSPACE=cadence
BIND_ON_IP=0.0.0.0
SKIP_SCHEMA_SETUP=false
VISIBILITY_KEYSPACE=cadence_visibility
CASSANDRA_HOSTNAME=soap_cassandra
RINGPOP_SEEDS=soap_cadence_frontend:7933,soap_cadence_history:7934,soap_cadence_worker:7939
CADENCE_HOME=/etc/cadence
SERVICES=matching
You can assume the default values for any other env var not shown above.
The RINGPOP_SEEDS are the service names assigned to each Cadence service; Docker Swarm creates a DNS entry for them, as well as a load balancer if more than one replica is declared.
The matching service seems to start correctly. Logs:
{"level":"info","ts":"2021-02-18T22:47:36.296Z","msg":"Created RPC dispatcher and listening","service":"cadence-matching","address":"0.0.0.0:7935","logging-call-at":"rpc.go:81"},
{"level":"warn","ts":"2021-02-18T22:47:36.321Z","msg":"Failed to fetch key from dynamic config","key":"system.advancedVisibilityWritingMode","error":"unable to find key","logging-call-at":"config.go:68"},
{"level":"info","ts":"2021-02-18T22:47:36.336Z","msg":"Add new peers by DNS lookup","address":"0.0.0.0","addresses":"[0.0.0.0:7933]","logging-call-at":"clientBean.go:321"},
{"level":"info","ts":"2021-02-18T22:47:36.321Z","msg":"Creating RPC dispatcher outbound","service":"cadence-frontend","address":"0.0.0.0:7933","logging-call-at":"clientBean.go:277"},
{"level":"info","ts":"2021-02-18T22:47:36.441Z","msg":"Starting service matching","logging-call-at":"server.go:217"},
{"level":"warn","ts":"2021-02-18T22:47:36.441Z","msg":"Failed to fetch key from dynamic config","key":"matching.throttledLogRPS","error":"unable to find key","logging-call-at":"config.go:68"},
{"level":"info","ts":"2021-02-18T22:47:36.441Z","msg":"Creating RPC dispatcher outbound","service":"cadence-frontend","address":"127.0.0.1:7933","logging-call-at":"clientBean.go:277"},
{"level":"info","ts":"2021-02-18T22:47:36.442Z","msg":"Add new peers by DNS lookup","address":"127.0.0.1","addresses":"[127.0.0.1:7933]","logging-call-at":"clientBean.go:321"},
{"level":"info","ts":"2021-02-18T22:47:36.713Z","msg":"matching starting","service":"cadence-matching","logging-call-at":"service.go:90"},
{"level":"info","ts":"2021-02-18T22:47:36.734Z","msg":"RuntimeMetricsReporter started","service":"cadence-matching","logging-call-at":"runtime.go:169"},
{"level":"info","ts":"2021-02-18T22:47:36.734Z","msg":"PProf not started due to port not set","logging-call-at":"pprof.go:64"},
{"level":"info","ts":"2021-02-18T22:47:36.799Z","msg":"Current reachable members","component":"service-resolver","service":"cadence-matching","addresses":"[[::]:7935]","logging-call-at":"rpServiceResolver.go:246"},
{"level":"info","ts":"2021-02-18T22:47:36.799Z","msg":"Current reachable members","component":"service-resolver","service":"cadence-worker","addresses":"[[::]:7939]","logging-call-at":"rpServiceResolver.go:246"},
{"level":"info","ts":"2021-02-18T22:47:36.800Z","msg":"Current reachable members","component":"service-resolver","service":"cadence-frontend","addresses":"[[::]:7933]","logging-call-at":"rpServiceResolver.go:246"},
{"level":"info","ts":"2021-02-18T22:47:36.814Z","msg":"service started","service":"cadence-matching","logging-call-at":"resourceImpl.go:383"},
{"level":"info","ts":"2021-02-18T22:47:36.814Z","msg":"matching started","service":"cadence-matching","logging-call-at":"service.go:99"}
I can see the following errors in the logs when the workflow is executing:
{"level":"error","ts":"2021-02-18T22:17:07.281Z","msg":"Persistent store operation failure","service":"cadence-matching","component":"matching-engine","wf-task-list-name":"ae85d0ac1629:f8102a0f-406a-4fc7-8abf-e4b3fd66a278","wf-task-list-type":0,"store-operation":"create-task","error":"Failed to create task. TaskList: ae85d0ac1629:f8102a0f-406a-4fc7-8abf-e4b3fd66a278, taskListType: 0, rangeID: 14, db rangeID: 15","wf-task-list-name":"ae85d0ac1629:f8102a0f-406a-4fc7-8abf-e4b3fd66a278","wf-task-list-type":0,"number":1300001,"next-number":1300001,"logging-call-at":"taskWriter.go:176","stacktrace":"github.com/uber/cadence/common/log/loggerimpl.(*loggerImpl).Error\n\t/cadence/common/log/loggerimpl/logger.go:134\ngithub.com/uber/cadence/service/matching.(*taskWriter).taskWriterLoop\n\t/cadence/service/matching/taskWriter.go:176"},
{"level":"error","ts":"2021-02-18T22:52:03.740Z","msg":"Persistent store operation failure","service":"cadence-matching","component":"matching-engine","wf-task-list-name":"8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca","wf-task-list-type":0,"store-operation":"create-task","error":"Failed to create task. TaskList: 8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca, taskListType: 0, rangeID: 16, db rangeID: 17","wf-task-list-name":"8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca","wf-task-list-type":0,"number":1500002,"next-number":1500002,"logging-call-at":"taskWriter.go:176","stacktrace":"github.com/uber/cadence/common/log/loggerimpl.(*loggerImpl).Error\n\t/cadence/common/log/loggerimpl/logger.go:134\ngithub.com/uber/cadence/service/matching.(*taskWriter).taskWriterLoop\n\t/cadence/service/matching/taskWriter.go:176"},
{"level":"error","ts":"2021-02-18T22:10:10.971Z","msg":"Persistent store operation failure","service":"cadence-matching","component":"matching-engine","wf-task-list-name":"FeaTaskList","wf-task-list-type":1,"store-operation":"create-task","error":"Failed to create task. TaskList: FeaTaskList, taskListType: 1, rangeID: 94, db rangeID: 95","wf-task-list-name":"FeaTaskList","wf-task-list-type":1,"number":9300001,"next-number":9300001,"logging-call-at":"taskWriter.go:176","stacktrace":"github.com/uber/cadence/common/log/loggerimpl.(*loggerImpl).Error\n\t/cadence/common/log/loggerimpl/logger.go:134\ngithub.com/uber/cadence/service/matching.(*taskWriter).taskWriterLoop\n\t/cadence/service/matching/taskWriter.go:176"},
{"level":"error","ts":"2021-02-18T22:09:53.345Z","msg":"Persistent store operation failure","service":"cadence-matching","component":"matching-engine","wf-task-list-name":"8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca","wf-task-list-type":0,"store-operation":"create-task","error":"Failed to create task. TaskList: 8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca, taskListType: 0, rangeID: 14, db rangeID: 15","wf-task-list-name":"8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca","wf-task-list-type":0,"number":1300001,"next-number":1300001,"logging-call-at":"taskWriter.go:176","stacktrace":"github.com/uber/cadence/common/log/loggerimpl.(*loggerImpl).Error\n\t/cadence/common/log/loggerimpl/logger.go:134\ngithub.com/uber/cadence/service/matching.(*taskWriter).taskWriterLoop\n\t/cadence/service/matching/taskWriter.go:176"},
{"level":"error","ts":"2021-02-18T22:53:56.145Z","msg":"Persistent store operation failure","service":"cadence-matching","component":"matching-engine","wf-task-list-name":"8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca","wf-task-list-type":0,"store-operation":"create-task","error":"Failed to create task. TaskList: 8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca, taskListType: 0, rangeID: 17, db rangeID: 18","wf-task-list-name":"8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca","wf-task-list-type":0,"number":1600001,"next-number":1600001,"logging-call-at":"taskWriter.go:176","stacktrace":"github.com/uber/cadence/common/log/loggerimpl.(*loggerImpl).Error\n\t/cadence/common/log/loggerimpl/logger.go:134\ngithub.com/uber/cadence/service/matching.(*taskWriter).taskWriterLoop\n\t/cadence/service/matching/taskWriter.go:176"}
The docker image version I'm currently using is: ubercadence/server:0.15.1
Is there any way to resolve this issue?
My best guess is that the problem is BIND_ON_IP=0.0.0.0. Each instance should use a unique hostIP:port as its address. Because they all advertise 0.0.0.0, every service will only work when running a single instance; more than one instance will conflict.
However, this is not a problem for the frontend service, because the FE is stateless.
Matching/History will run into this problem:
HostA registers itself with the matching service ring as 0.0.0.0:7935, and then HostB tries to do the same. This makes the consistent hashing ring unstable, and task list ownership keeps switching between HostA and HostB.
To resolve this, you need to let each instance use its own host IP, like Kubernetes does with the Pod IP (a sketch follows the Helm chart link below).
Once you resolve this, you will see in the FE/history logs that they successfully connect to two matching hosts:
{"level":"info","ts":"2021-02-18T22:47:36.799Z","msg":"Current reachable members","component":"service-resolver","service":"cadence-matching","addresses":"[HostA_IP:7935, HostB_IP:7935]","logging-call-at":"rpServiceResolver.go:246"},
See the Cadence Helm chart for an example of how we do that on Kubernetes: https://github.com/banzaicloud/banzai-charts/blob/87cf2946434c22cb963fea47b662ea85974ecfc0/cadence/templates/server-configmap.yaml#L82
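A minimal sketch of the Kubernetes side of that idea (the exact wiring in the linked chart differs; treat the variable name and structure here as an example only):
env:
  - name: BIND_ON_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP   # each pod advertises its own IP instead of 0.0.0.0
On Docker Swarm there is, as far as I know, no direct equivalent, so a common workaround is an entrypoint that sets BIND_ON_IP=$(hostname -i) before starting the server, assuming hostname -i returns the container's own address.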

MySQL connection pool in Python?

I'm trying to process a large amount of data using Python, maintaining the processing status in MySQL. However, I'm surprised there is no standard connection pool for python-mysql (like HikariCP in Java).
I initially started with PyMySQL; things were great until the program had run for a few hours, then things started to fail. I was getting lots of errors like:
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on '127.0.0.1' ([Errno 99] Cannot assign requested address)")
Moreover, lots of ports were stuck in the TIME_WAIT state because, lacking connection pooling, I was opening and closing connections too frequently:
/d/p/950 ❯❯❯ netstat -nt | wc -l
84752
Per this and this, I tried to set tcp_fin_timeout and ip_local_port_range, but hardly anything improved.
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 15000 65000 > /proc/sys/net/ipv4/ip_local_port_range
Then I found out that MySQL provides mysql.connector, which comes with pooling functionality. After switching to it, performance actually deteriorated and more processes started to fail. I'm using Python's multiprocessing module to run 29 processes simultaneously (multiprocessing.Pool picked this number by default) on a 24-core machine. The code follows; of course, I was using .my.cnf to pass all the credentials to avoid committing them to git:
import mysql.connector
from mysql.connector import pooling

conn_pool = pooling.MySQLConnectionPool(pool_name="mypool1",
                                        pool_size=pooling.CNX_POOL_MAXSIZE,
                                        option_files=MYSQL_CONFIG,
                                        option_groups=MYSQL_GROUP_NODE1,
                                        allow_local_infile=True)
conn = conn_pool.get_connection()
Finally, I reverted to the old code. I'm still using PyMySQL, and though the errors are less frequent, they still cause a major problem. I looked at SQLAlchemy but couldn't really find much documentation around its pooling.
I'm wondering how everyone else is dealing with the mysql-python connection pooling issue? I really believe there should be something out there so that I don't have to reinvent the wheel.
Any pointers are much appreciated.
DBUtils implements the user-sized connection pool PooledDB, the thread-mapped pool PersistentDB, and SteadyDB for MySQL (and generally claims to support arbitrary DB-API 2 compliant database interfaces); see the functionality section. The latter should fit your case, where multiprocessing.Pool creates worker processes, each with its own managed, persistent database connection. It is described as:
DBUtils.SteadyDB is a module implementing "hardened" connections to a database, based on ordinary connections made by any DB-API 2 database module. A "hardened" connection will transparently reopen upon access when it has been closed or the database connection has been lost or when it is used more often than an optional usage limit.
You can use it with PyMySQL like:
import pymysql
from DBUtils.SteadyDB import connect

db = connect(
    creator=pymysql,  # the remaining keyword arguments are passed through to pymysql
    user='guest', password='', database='name',
    autocommit=True, charset='utf8mb4',
    cursorclass=pymysql.cursors.DictCursor)
Also see this related question for more examples.
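Since the question also mentions SQLAlchemy: its create_engine keeps a built-in QueuePool, so a minimal sketch (connection URL, pool sizes and the query are placeholders, not values from your setup) would be:

import sqlalchemy

# create_engine manages a QueuePool internally; pool_recycle guards against
# MySQL closing idle connections server-side after wait_timeout
engine = sqlalchemy.create_engine(
    "mysql+pymysql://user:password@127.0.0.1/dbname",
    pool_size=10,       # connections kept open in the pool
    max_overflow=5,     # extra connections allowed under burst load
    pool_recycle=3600,  # recycle connections older than an hour (seconds)
)

with engine.connect() as conn:
    row = conn.execute(sqlalchemy.text("SELECT 1")).fetchone()

Note that with multiprocessing each worker process should create its own engine; a pool created before the fork should not be shared across processes.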

How to add a node to a running neo4j cluster?

Suppose I have a single running neo4j node configured for HA mode. The relevant config lines are, I believe:
"ha.cluster_server" : "hostname:5003",
"ha.initial_hosts" : "hostname:5003",
Is it possible to add another node that will, upon joining, form a 2-node cluster with the currently running one?
I should clarify that I tried doing it by the book, i.e. configuring the second member like this:
"ha.cluster_server" : "hostname:5004",
"ha.initial_hosts" : "hostname:5004,hostname:5003",
But the second member just hangs in an UNKNOWN state (transitioning to slave, I guess).
First of all, one server is not a cluster!
It should be possible, though. The configuration of the second server should look like this:
ha.server_id=2 # a different number than the one on the first server
ha.initial_hosts=first_server:5003,second_server:5003
e.g.
First server:
neo4j-server.properties:
org.neo4j.server.database.mode=HA
neo4j.properties:
ha.server_id=1
ha.initial_hosts=first_host:5001
ha.cluster_server=first_host:5001
ha.server=first_host:6001
Second server:
neo4j-server.properties:
org.neo4j.server.database.mode=HA
neo4j.properties:
ha.server_id=2 # a different number than the one on the first server
ha.initial_hosts=first_host:5001,second_host:5001
ha.cluster_server=second_host:5001
ha.server=second_host:6001

Quartz.net not firing on remote server

I've implemented Quartz.NET in a Windows service to run tasks, and everything works fine on my local workstation. But once it's deployed to the remote Windows Server host, it just hangs after initialization.
ISchedulerFactory schedFact = new StdSchedulerFactory();
// get a scheduler
var _scheduler = schedFact.GetScheduler();

// Configuration of triggers and jobs
var trigger = (ICronTrigger)TriggerBuilder.Create()
    .WithIdentity("trigger1", "group1")
    .WithCronSchedule(job.Value)
    .Build();

var jobDetail = JobBuilder.Create(Type.GetType(job.Key))
    .StoreDurably(true)
    .WithIdentity("job1", "group1")
    .Build();

var ft = _scheduler.ScheduleJob(jobDetail, trigger);
Everything seems to be standard. I have a private static reference to the scheduler; logging stops right after the jobs are initialized and added to the scheduler. Nothing else happens after that.
I'd appreciate any advice.
Thanks.
PS:
I found some strange events in Event Viewer, maybe related to Quartz.NET:
Restart Manager - Starting session 2 - ‎2012‎-‎07‎-‎09T15:14:15.729569700Z.
Restart Manager - Ending session 2 started ‎2012‎-‎07‎-‎09T15:14:15.729569700Z.
Based on your question and the additional info you gave in the comments, I would guess something is going wrong in the OnStart method of your service.
Here are some things you can do to help figure out and solve the problem:
Place the code in your OnStart method in a try/catch block, then install and start the service. Check the Windows event logs to see whether it was installed correctly, started correctly, etc.
The fact that Restart Manager is running leads me to believe that your service may depend on a process which is already in use. Make sure that any dependencies of your service are closed before installing it.
This problem can also be caused by putting data-intensive or long-running operations in your OnStart method. Make sure you keep this kind of code out of OnStart.
I had a similar problem, and it was caused by having dots/periods in the assembly name, e.g. Project.Update.Service. When I changed it to ProjectUpdateService it worked fine.
Strangely, it always worked on the development machine, just never on the remote machine.
UPDATE: It may have been the length of the service name that caused this issue; by removing the dots I shortened it. It looks like the maximum length is 25 characters.

How to monitor Elasticsearch using Nagios

I would like to monitor Elasticsearch using Nagios.
Basically, I want to know if Elasticsearch is up.
I think I can use the Elasticsearch Cluster Health API (see here)
and use the 'status' value I get back (green, yellow or red), but I still don't know how to wire that into Nagios (Nagios is on one server and Elasticsearch is on another).
Is there another way to do that?
EDIT:
I just found this: check_http_json. I think I'll try it.
After a while, I managed to monitor Elasticsearch using NRPE.
I wanted to use the Elasticsearch Cluster Health API, but I couldn't call it from another machine due to security restrictions...
So, on the monitoring server I created a new service whose check_command is check_nrpe!check_elastic (a sketch of the service definition follows below). Then, on the remote server where Elasticsearch runs, I edited the nrpe.cfg file with the following:
command[check_elastic]=/usr/local/nagios/libexec/check_http -H localhost -u /_cluster/health -p 9200 -w 2 -c 3 -s green
This is allowed, since the command runs on the remote server itself, so there are no security issues here...
It works!!!
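For reference, the service definition on the Nagios side could look roughly like this (host and service names are placeholders):
define service {
    use                     generic-service
    host_name               elasticsearch-node
    service_description     Elasticsearch cluster health
    check_command           check_nrpe!check_elastic
}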
I'll still try the check_http_json command that I posted in my question, but for now, my solution is good enough.
After playing around with the suggestions in this post, I wrote a simple check_elasticsearch script. It returns the status as OK, WARNING, and CRITICAL corresponding to the "status" parameter in the cluster health response ("green", "yellow", and "red" respectively).
It also grabs all the other parameters from the health page and dumps them out in the standard Nagios format.
Enjoy!
Shameless plug: https://github.com/jersten/check-es
You can use it with ZenOSS/Nagios to monitor cluster health, data indices, and individual node heap usage.
You can use this cool Python script to monitor your Elasticsearch cluster. The script checks your IP:port for the Elasticsearch cluster status (note that it targets Python 2). This one and more Python scripts for monitoring Elasticsearch can be found here.
#!/usr/bin/python
from nagioscheck import NagiosCheck, UsageError
from nagioscheck import PerformanceMetric, Status
import urllib2
import optparse

try:
    import json
except ImportError:
    import simplejson as json


class ESClusterHealthCheck(NagiosCheck):

    def __init__(self):
        NagiosCheck.__init__(self)

        self.add_option('H', 'host', 'host', 'The cluster to check')
        self.add_option('P', 'port', 'port', 'The ES port - defaults to 9200')

    def check(self, opts, args):
        host = opts.host
        port = int(opts.port or '9200')

        try:
            response = urllib2.urlopen(r'http://%s:%d/_cluster/health'
                                       % (host, port))
        except urllib2.HTTPError, e:
            raise Status('unknown', ("API failure", None,
                         "API failure:\n\n%s" % str(e)))
        except urllib2.URLError, e:
            raise Status('critical', (e.reason))

        response_body = response.read()

        try:
            es_cluster_health = json.loads(response_body)
        except ValueError:
            raise Status('unknown', ("API returned nonsense",))

        cluster_status = es_cluster_health['status'].lower()

        if cluster_status == 'red':
            raise Status("CRITICAL",
                         "Cluster status is currently reporting as Red")
        elif cluster_status == 'yellow':
            raise Status("WARNING",
                         "Cluster status is currently reporting as Yellow")
        else:
            raise Status("OK",
                         "Cluster status is currently reporting as Green")


if __name__ == "__main__":
    ESClusterHealthCheck().run()
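Judging from the add_option calls above, it is presumably invoked with something like ./check_es_cluster_health.py -H <es-host> -P 9200 (the script filename here is a placeholder).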
I wrote this a million years ago, and it might still be useful: https://github.com/radu-gheorghe/check-es
But it really depends on what you want to monitor. The above measures:
if Elasticsearch responds to HTTP
if the ingestion rate drops below the defined levels
if the total number of documents drops below the defined levels
But of course there's much more that might be interesting, from query time to JVM heap usage. We wrote a blog post about the most important ones here: https://sematext.com/blog/top-10-elasticsearch-metrics-to-watch/
Elasticsearch has APIs for all these, so you may be able to use a generic check_http_json to get the needed metrics. Alternatively, you may want to use something like Sematext Monitoring for Elasticsearch, which gets these metrics out of the box, then forward threshold/anomaly alerts to Nagios. (disclosure: I work for Sematext)
