F5 LTM config through tmsh - At least one monitor

f5-LTM version 11.6
Hi,
I'm looking for the syntax to create a pool via tmsh
with two monitors (monitor_A, monitor_B)
and 'Availability Requirement' set to 'At Least...' '1' Health Monitor(s).
I checked https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-tmsh-reference-11-6-0.html
to no avail; it only shows the following syntax:
create pool ... monitor [name]
Thanks in advance.

Here's an example:
tmsh create ltm pool mypool \
    monitor min 1 of { http https } \
    members add { 192.168.103.10:80 192.168.103.11:80 }
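If the pool already exists, the same 'min N of' expression can be applied with modify; a minimal sketch using the monitor names from the question:
tmsh modify ltm pool mypool monitor min 1 of { monitor_A monitor_B }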


How to change maximum memory allowed for function in OpenWhisk

I have an OpenWhisk deployment on a Kubernetes cluster that was done using [1]. I know I can change the memory limit for a function by adding --memory x when creating the function. However, if I try to set a value larger than 512 MB, I get the following error.
requirement failed: memory 812 MB exceeds allowed threshold of 536870912 B (code 10543)
I assume this is a configuration set during setup or within the code. Is there a way to increase this limit to a custom value? If so, what configuration do I need to change in order to do this?
[1] https://github.com/apache/incubator-openwhisk-deploy-kube
The memory limits are configurable for your deployment as of this patch: https://github.com/apache/incubator-openwhisk/pull/3148. You can set the max memory in your deployment to suit your purposes.
You have to set the environment variable CONFIG_whisk_memory_max on the invoker to a higher value, for example 1 GB (CONFIG_whisk_memory_max=1073741824).
One way is to edit the invoker deployment file:
https://github.com/apache/incubator-openwhisk-deploy-kube/blob/master/kubernetes/invoker/invoker-dcf.yml#L74
- name: "CONFIG_whisk_memory_max"
  value: "1073741824"
With ansible-playbook, you can specify the option -e limit_action_memory_max=1073741824 when you deploy OpenWhisk. The whole command might look like:
ansible-playbook openwhisk.yml -e limit_action_memory_max=1073741824 # with other options like '-e invoker_user_memory=25600m'
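Once the deployment picks up the new limit, recreating the function with the larger value should succeed; a sketch with a hypothetical action name and source file:
wsk action update myAction myAction.js --memory 812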

Don't have access to Pureftp using Unix credentials

I've been struggling with PureFTP on my Orange Pi Zero (Armbian 5.38, Ubuntu). I don't know what I should do to log in with system credentials: I have PAMAuthentication set to "no" and UnixAuthentication set to "yes", and I don't know why it treats me as "Anonymous" (anonymous access is off).
I'm not using pure-ftpd.conf (that confuses me); I just want to keep things as simple as possible. I don't want to use virtual users, so pure-pw hasn't been configured...
I thought it could be the TLS option, so I tried setting "pure-ftpd -Y 0", but that froze my SSH connection... Why? Similar PureFTP commands show the same behavior, and the temperature is fine (33°C).
Thanks
Finally RESOLVED!
I forgot to check what was inside auth/70pam and auth/65unix; that was my error (each file contains YES or NO).
I changed 65unix to "NO" and 70pam to "YES", then in conf/ set PAMAuthentication to "YES" and UnixAuthentication to "NO" (because PAM authentication already includes a Unix authentication module by default).
In the end this wasn't what I was looking for (I wanted a user chrooted to a single directory), so I created PureFTP virtual users: first create a Linux user (ftpuser), then you can create multiple "virtual users" through the pure-pw command. It's simple once you understand PureFTP virtual users.
Hope it helps!
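For anyone landing here, the virtual-user setup described above might look roughly like this; a sketch, assuming a system user ftpuser/ftpgroup and a placeholder login and chroot directory:
# system user that owns the virtual users' files
sudo groupadd ftpgroup
sudo useradd -g ftpgroup -d /dev/null -s /usr/sbin/nologin ftpuser
# virtual user "alice", chrooted to her home directory by -d
sudo pure-pw useradd alice -u ftpuser -g ftpgroup -d /home/ftp/alice
sudo pure-pw mkdb
(PureDB authentication also needs to be enabled so pure-ftpd reads the virtual-user database.)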

Work progress report on a weekly basis by the users in Jira

How can I view the tasks done/in progress on a weekly basis by the users in Jira?
Thanks in advance.
Your question isn't very clear. What do you mean by done/in progress? The status of the issue? And by "How can I view", what exactly do you mean? See them in Jira? Send a weekly mail?
Anyway, in case by done/in progress you mean that the issue is closed/unclosed and you are looking for the right JQL query, then:
Closed last week:
project = Development and status = Closed and updated >= "-7d"
Worked on during last week, but not closed:
project = Development and status != Closed and updated >= "-7d"
Opened last week, but not closed:
project = Development and status != Closed and created >= "-7d"
and so on. For more query options, visit JIRA Advanced Searching. If you have more questions, feel free to ask.
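If you'd rather pull these results programmatically (for example, to build a weekly mail), the same JQL can be sent to Jira's REST search endpoint; a minimal sketch, with placeholder credentials and the project key from the examples above:
curl -G -u admin:password "https://jira.company.com/rest/api/2/search" \
  --data-urlencode 'jql=project = Development AND status = Closed AND updated >= -7d'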
I wrote a simple CLI tool, jira-report, that queries your Jira and prints a weekly report to the console:
$ jira-report
Jira site address: https://jira.company.com
Username for 'https://jira.company.com': admin
Password for 'https://jira.company.com':
Connecting to 'https://jira.company.com'. Pls wait...
What was [admin] doing:
Created: 2
WFM-7180 - Provide static context for log property in BasicHashAnalyzer
TST-5862 - Unable to install Nginx on HP-UX with Java 6
Resolved: 8
GSM-364 - Migration of existing scenario
WFM-5865 - NullPointerException while finding categories
TST-5864 - Some NGinx installation improvements
TST-5863 - NGinx minimal dependency
SDK-7139 - Move common interfaces and classes from into individual jar
SDK-7138 - Move common interfaces and classes from into individual dll
TST-7111 - Event.getDonotNotify doesn't indicate about agent's state
TST-6985 - TST classes should have static Log fields
Reopened: 0
Closed: 5
TST-6943 - Remove redundant org.apache.log4j dependency from common part
TST-5862 - Unable to install NGinx on HP-UX with Java 6
TST-5857 - Put back support for Jdk 1.6
TST-5840 - NGinx fails to handle interaction initiated
GSM-364 - Migration of existing units
Enjoy it!

How to monitor elasticsearch using nagios

I would like to monitor Elasticsearch using Nagios.
Basically, I want to know if Elasticsearch is up.
I think I can use the Elasticsearch Cluster Health API (see here)
and use the 'status' I get back (green, yellow or red), but I still don't know how to wire that into Nagios (Nagios is on one server and Elasticsearch is on another).
Is there another way to do that?
EDIT:
I just found this: check_http_json. I think I'll try it.
After a while, I've managed to monitor Elasticsearch using NRPE.
I wanted to use the Elasticsearch Cluster Health API, but I couldn't call it from another machine due to security restrictions...
So, on the monitoring server I created a new service whose check_command is check_nrpe!check_elastic. On the remote server, where Elasticsearch runs, I edited the nrpe.cfg file with the following:
command[check_elastic]=/usr/local/nagios/libexec/check_http -H localhost -u /_cluster/health -p 9200 -w 2 -c 3 -s green
This is allowed, since the command runs on the remote server itself, so there are no security issues here...
It works!!!
I'll still try the check_http_json command that I posted in my question, but for now my solution is good enough.
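For completeness, the matching service definition on the monitoring server might look something like this; a sketch, with a placeholder host name and service template:
define service {
    use                  generic-service
    host_name            es-node-1
    service_description  Elasticsearch Cluster Health
    check_command        check_nrpe!check_elastic
}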
After playing around with the suggestions in this post, I wrote a simple check_elasticsearch script. It returns the status as OK, WARNING, and CRITICAL corresponding to the "status" parameter in the cluster health response ("green", "yellow", and "red" respectively).
It also grabs all the other parameters from the health page and dumps them out in the standard Nagios format.
Enjoy!
Shameless plug: https://github.com/jersten/check-es
You can use it with ZenOSS/Nagios to monitor cluster health, data indices, and individual node heap usage.
You can use this cool Python script for monitoring your Elasticsearch cluster. The script checks your IP:port for Elasticsearch status. This one and more Python scripts for monitoring Elasticsearch can be found here.
#!/usr/bin/python
# Nagios plugin: maps Elasticsearch cluster health (green/yellow/red)
# to the Nagios states OK/WARNING/CRITICAL.
from nagioscheck import NagiosCheck, UsageError
from nagioscheck import PerformanceMetric, Status
import urllib2
import optparse

try:
    import json
except ImportError:
    import simplejson as json


class ESClusterHealthCheck(NagiosCheck):

    def __init__(self):
        NagiosCheck.__init__(self)
        self.add_option('H', 'host', 'host', 'The cluster to check')
        self.add_option('P', 'port', 'port', 'The ES port - defaults to 9200')

    def check(self, opts, args):
        host = opts.host
        port = int(opts.port or '9200')

        # Query the cluster health API; network errors are CRITICAL,
        # HTTP errors are UNKNOWN.
        try:
            response = urllib2.urlopen(r'http://%s:%d/_cluster/health'
                                       % (host, port))
        except urllib2.HTTPError, e:
            raise Status('unknown', ("API failure", None,
                         "API failure:\n\n%s" % str(e)))
        except urllib2.URLError, e:
            raise Status('critical', (e.reason))

        response_body = response.read()

        try:
            es_cluster_health = json.loads(response_body)
        except ValueError:
            raise Status('unknown', ("API returned nonsense",))

        # Translate the cluster status colour into a Nagios state.
        cluster_status = es_cluster_health['status'].lower()

        if cluster_status == 'red':
            raise Status("CRITICAL", "Cluster status is currently reporting as "
                         "Red")
        elif cluster_status == 'yellow':
            raise Status("WARNING", "Cluster status is currently reporting as "
                         "Yellow")
        else:
            raise Status("OK",
                         "Cluster status is currently reporting as Green")


if __name__ == "__main__":
    ESClusterHealthCheck().run()
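A quick way to try it from the monitoring host; a sketch, assuming the script above is saved as check_es_cluster_health.py and made executable (the -H and -P flags come from the add_option calls in the script):
./check_es_cluster_health.py -H es-node-1 -P 9200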
I wrote this a million years ago, and it might still be useful: https://github.com/radu-gheorghe/check-es
But it really depends on what you want to monitor. The above measures:
whether Elasticsearch responds to HTTP
whether the ingestion rate drops below the defined levels
whether the total number of documents drops below the defined levels
But of course there's much more that might be interesting, from query time to JVM heap usage. We wrote a blog post about the most important ones here: https://sematext.com/blog/top-10-elasticsearch-metrics-to-watch/
Elasticsearch has APIs for all these, so you may be able to use a generic check_http_json to get the needed metrics. Alternatively, you may want to use something like Sematext Monitoring for Elasticsearch, which gets these metrics out of the box, then forward threshold/anomaly alerts to Nagios. (disclosure: I work for Sematext)

Windows Service Starts then Stops

I have a Windows Service that I inherited from a departed developer. The Windows Service is running just fine in the QA environment. When I install the service and run it locally, I receive this error:
Service cannot be started. System.InvalidOperationException: The requested Performance Counter is not a custom counter, it has to be initialized as ReadOnly.
Here is the code:
ExternalDataExchangeService exchangeService = new ExternalDataExchangeService();
workflowRuntime.AddService(exchangeService);
workflowRuntime.AddService(new SqlTrackingService(AppContext.SqlConnectionImportLog));
ChallengerWorkflowService challengerWorkflowService = new ChallengerWorkflowService();
challengerWorkflowService.SendDataEvent += new EventHandler<SendDataEventArgs>(challengerWorkflowService_SendDataEvent);
workflowRuntime.AddService(challengerWorkflowService);
workflowRuntime.StartRuntime(); // <---- Exception is thrown here.
Check for installer code. Often you will find the counters are created during installation (which would have been run under admin privileges at the client site), and the code then uses them as though they exist, but never tries to create them because it does not expect to have the permissions.
If you just get the source and try to run it, the counters / counter classes do not exist, so you fall over immediately. (Alternatively, check whether the counter exists / whether you have local admin rights, if the code creates it in the service.)
I've seen it before, so I mention it.
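One quick way to test the missing-counter theory from an elevated command prompt; a sketch with a placeholder category name:
powershell -Command "[System.Diagnostics.PerformanceCounterCategory]::Exists('YourCategoryName')"
If this prints False, the installer-created counters are indeed missing on your machine.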
Attach a debugger and break on InvalidOperationException (first-chance, i.e. when thrown)?
