Query FlexLM server to get the feature usage limit for a user

The FlexLM (or FlexNet) server can be configured with the following option:
MAX num_lic feature[:keyword=value] type {list | group_name}
The standard command (lmutil lmstat -f ) gives only the number of available licenses and the users currently using them.
Using lmutil, is there a way to know the maximum number of licenses a particular user can use?
Thank you

From the documentation and my experience, the answer is 'no'. You can only see the usable increments associated with a user if the RESERVE option is used; then the increments are shown as used, even if the user doesn't use them.
For a group (I have no example with USER), you can get something like:
Users of a-increment: (Total of 188 licenses issued; Total of 160 licenses in use)
"a-increment" v2015.1231, vendor: daemon-vendor
floating license
4 RESERVATIONs for GROUP A-GROUP_Group (license-server/27000)
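For reference, a hedged sketch of the options-file lines that produce such reservations (the feature, user, and group names are placeholders):

```
RESERVE 2 a-increment USER jdoe
RESERVE 4 a-increment GROUP A-GROUP_Group
```

Reserved increments then show up in lmstat output, counted as in use whether or not the user has actually checked them out.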

I was wondering if you would get better results using: lmutil lmstat -a -c # -i
We at OpenLM (www.openlm.com) use this to monitor reserved licenses.

Related

Using MAX for all features in OPTIONS file with Flexlm licensing

I am using an OPTIONS file to limit who gets access to the licenses and how many licenses each user gets.
I can include all features for a user or group by using an IP address wildcard, as below:
INCLUDEALL INTERNET 59.98.121.*
But for limiting the number of licenses, I don't see any option other than MAX.
For the MAX line I don't see any way to specify all features in the license file; I have to write one line per feature:
MAX 5 feature_name INTERNET 59.98.121.*
I have 100 groups and 500 features, which means I would need 50,000 MAX lines in the options file.
Is there an alternative way of defining the limit? Or can I omit feature_name so it applies to all features?
Omitting feature_name in a MAX statement is not supported by the FlexLM syntax, and I have verified (on v11.16.4.0) that a MAX line without it is ignored by the system.
A solution could be to create a package with the features as components, in the license file:
PACKAGE package_name vendor_name COMPONENTS="feature1 feature2"
and then use the MAX statement with the package name (instead of the individual features) in the options file:
MAX 2 package_name HOST_GROUP hostgroup_name
This way you would have "only" 100 MAX statements to prepare.
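To make this concrete, a sketch of how the two files could fit together (the package, feature, and group names below are placeholders):

```
# License file: bundle the features as components of one package
PACKAGE pkg_all vendor_name COMPONENTS="feature1 feature2 feature3"

# Options file: one MAX line per group, against the package
HOST_GROUP group1_hosts host1 host2
MAX 2 pkg_all HOST_GROUP group1_hosts
```

A checkout of any component feature then counts against the single per-group limit on the package.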

How do I find if my Wi-Fi card supports MIMO (Multiple Input Multiple Output)?

I want to know if my Wi-Fi card supports MIMO (Multiple Input Multiple Output), and specifically how to find the number of antennas.
Is there a command I can run to find out?
If you're using Windows, type this in the command line: netsh wlan show all | find /I "MIMO".
If you see MU-MIMO : Supported, then the answer is yes.
I'm not sure how to do this in Linux, aside from checking the network card model and looking at the technical specifications; that will give you a 100% correct answer.
But you can try this: iw phy | grep index; you will see something like:
HT TX/RX MCS rate indexes supported: 0-15
If you see an index above 7, your card supports MIMO. Why is that?
MIMO requires at least two antennas (meaning two spatial streams of data), and this table explains the index/streams relation.
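As a sketch of that relation: for HT (802.11n), each block of 8 MCS indexes adds one spatial stream, so indexes 0-7 mean one stream and 8-15 mean two. The parsing of the iw output line below is illustrative:

```python
def spatial_streams(mcs_index):
    """Number of HT spatial streams implied by an 802.11n MCS index.

    MCS 0-7 use one spatial stream, 8-15 use two, 16-23 use three,
    and so on -- each block of 8 indexes adds a stream.
    """
    return mcs_index // 8 + 1


def supports_mimo(iw_line):
    """Parse a line like 'HT TX/RX MCS rate indexes supported: 0-15'
    and report whether the highest index implies more than one stream."""
    ranges = iw_line.split(":", 1)[1]  # e.g. ' 0-15'
    highest = max(int(part.split("-")[-1]) for part in ranges.split(","))
    return spatial_streams(highest) > 1


print(supports_mimo("HT TX/RX MCS rate indexes supported: 0-15"))  # True: MCS 15 needs 2 streams
```

So an index range topping out above 7 implies at least two streams, and therefore MIMO.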

How to define Alerts with exception in InfluxDB/Kapacitor

I'm trying to figure out the best, or at least a reasonable, approach to defining alerts in InfluxDB. For example, I might use the CPU batch tickscript that comes with telegraf. This could be set up as a global monitor/alert for all hosts being monitored by telegraf.
What is the approach when you want to deviate from the above setup for one host, i.e. instead of X% for a specific server we want to alert on Y%?
I'm happy that a distinct tickscript could be created for the custom values, but how do I go about excluding the host from the original 'global' one?
This is a simple scenario, but it needs to meet the needs of 10,000 hosts, of which there will be hundreds of exceptions, and it will also encompass tens or hundreds of global alert definitions.
I'm struggling to see how you could use the platform as the primary source of monitoring/alerting.
As said in the comments, you can use the sideload node to achieve that.
Say you want to ensure that your InfluxDB servers are not overloaded. You may want to allow 100 measurements by default; only on one server, which happens to receive a massive number of datapoints, you want to limit it to 10 (a value which is easily exceeded by the _internal database, but good for our example).
Given the following excerpt from a TICKscript:
var data = stream
    |from()
        .database(db)
        .retentionPolicy(rp)
        .measurement(measurement)
        .groupBy(groupBy)
        .where(whereFilter)
    |eval(lambda: "numMeasurements")
        .as('value')

var customized = data
    |sideload()
        .source('file:///etc/kapacitor/customizations/demo/')
        .order('hosts/host-{{.hostname}}.yaml')
        .field('maxNumMeasurements', 100)
    |log()

var trigger = customized
    |alert()
        .crit(lambda: "value" > "maxNumMeasurements")
and the name of the server with the exception being influxdb, with the file /etc/kapacitor/customizations/demo/hosts/host-influxdb.yaml looking as follows:
maxNumMeasurements: 10
A critical alert will be triggered if value (and hence numMeasurements) exceeds 10 and the hostname tag equals influxdb, or if value exceeds 100 on any other host.
There is an example in the documentation handling scheduled downtimes using sideload.
Furthermore, I have created an example, available on GitHub, using docker-compose.
Note that there is a caveat with the example: the alert flaps because of a second, dynamically generated database. But it should be sufficient to show how to approach the problem.
What is the cost of using sideload nodes in terms of performance and computation if you have over 10 thousand servers?
Managing alerts manually, directly in Chronograf/Kapacitor, is not feasible for a big number of custom alerts.
At AMMP Technologies we need to manage alerts per database, customer, and customer object, and the number can go into the thousands. We've opted for a custom solution where we keep a standard set of template tickscripts (not to be confused with Kapacitor templates) and provide an interface to the user where we expose only the relevant variables. A service (written in Python) then combines the values for those variables with a tickscript and, using the Kapacitor API, deploys (updates, or deletes) the task on the Kapacitor server. This is automated so that data for new customers/objects is combined with the templates and deployed to Kapacitor automatically.
You obviously need to design your tasks to be specific enough so that they don't overlap and generic enough so that it's not too much work to create tasks for every little thing.
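To illustrate, a minimal sketch of that kind of deployment service, assuming Kapacitor's task-creation endpoint (POST /kapacitor/v1/tasks); the template text, variable names, and task id here are invented for illustration:

```python
import json
from string import Template

# Hypothetical template tickscript; in practice these would live in files.
TICK_TEMPLATE = Template("""
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < $crit_threshold)
""")


def build_task(task_id, db, rp, variables):
    """Combine per-customer variables with a template tickscript and
    return the JSON body for creating the task via the Kapacitor API."""
    return {
        "id": task_id,
        "type": "stream",
        "dbrps": [{"db": db, "rp": rp}],
        "script": TICK_TEMPLATE.substitute(variables),
        "status": "enabled",
    }


task = build_task("cpu-alert-customer-a", "telegraf", "autogen",
                  {"crit_threshold": 10})
print(json.dumps(task, indent=2))
# The actual deployment step would then be something like:
#   requests.post("http://kapacitor:9092/kapacitor/v1/tasks", json=task)
```

Updating or deleting a task works the same way against the task's URL, which is what makes the whole pipeline automatable.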

Redis Memory Optimization suggestions

I have a Redis master and 2 slaves, all 3 currently on the same Unix server. The memory used by the 3 instances is approximately 3.5 GB, 3 GB, and 3 GB. There are about 275,000 keys in the Redis DB, of which about 4,000 are hashes. One set has 100,000 values. One list has 275,000 entries; it's a list of hashes and sets. The server has 16 GB of memory in total, of which 9.5 GB is currently used. Persistence is currently off; the RDB file is written once a day by a forced background save. Please provide any suggestions for optimization. The max-ziplist configuration is currently at its defaults.
Optimizing Hashes
First, let's look at the hashes. Two important questions: how many elements are in each hash, and what is the largest value in those hashes? A hash uses the memory-efficient ziplist representation if the following condition is met:
len(hash) < hash-max-ziplist-entries && length-of-largest-field(hash) < hash-max-ziplist-value
You should increase the two settings in redis.conf based on your data, but don't increase them to more than 3-4 times the defaults.
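As a sketch, the condition above can be expressed as follows (the default thresholds of 128 entries and 64 bytes are the stock redis.conf values for hashes; the function name is ours):

```python
def uses_ziplist(h, max_entries=128, max_value=64):
    """Return True if Redis would keep hash `h` in the compact ziplist
    encoding: fewer entries than hash-max-ziplist-entries, and every
    field/value shorter than hash-max-ziplist-value bytes."""
    if len(h) >= max_entries:
        return False
    longest = max((len(str(x)) for kv in h.items() for x in kv), default=0)
    return longest < max_value
```

Running your real hashes through a check like this tells you how far the two settings would need to be raised for them to stay in the compact encoding.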
Optimizing Sets
A set with 100,000 members cannot be optimized unless you provide additional details on your use case. Some general strategies, though:
Maybe use a HyperLogLog: are you using the set to count unique elements? If the only commands you run are SADD and SCARD, maybe you should switch to a HyperLogLog.
Maybe use a Bloom filter: are you using the set to check for the existence of a member? If the only commands you run are SADD and SISMEMBER, maybe you should implement a Bloom filter and use it instead of the set.
How big is each element? Set members should be small. If you are storing big objects, you are perhaps doing something wrong.
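For the Bloom filter idea, a minimal sketch (the bit-array size and hashing scheme are arbitrary choices for illustration; in practice you would use a tested implementation rather than rolling your own):

```python
import hashlib


class BloomFilter:
    """Space-efficient membership test with a small false-positive rate
    and no false negatives."""

    def __init__(self, num_bits=4096, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, member):
        # Derive k bit positions by salting a hash of the member.
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{member}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, member):  # analogue of SADD
        for pos in self._positions(member):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, member):  # analogue of SISMEMBER
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(member))


bf = BloomFilter()
for member in ("user:1", "user:2", "user:3"):
    bf.add(member)
print(bf.might_contain("user:1"))    # True
print(bf.might_contain("user:999"))  # almost certainly False (small false-positive chance)
```

The trade-off: a few kilobytes of bits instead of 100,000 stored members, at the cost of occasional false positives and no way to enumerate or delete members.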
Optimizing Lists
A single list with 275,000 entries seems wrong. It is going to be slow to access elements in the middle of the list. Are you sure a list is the right data structure for your use case?
Change list-compress-depth to 1 or higher. Read about this setting in redis.conf; there are trade-offs. But for a list of 275,000 elements, you certainly want to enable compression.
Tools
Use the open-source redis-rdb-tools to analyze your data set (disclaimer: I am the author of this tool). It will tell you how much memory each key is taking and will help you decide where to concentrate your efforts.
You can also refer to this memory optimization cheat sheet.
What else?
You have provided very few details on your use case. The best savings come from picking the right data structure. I'd encourage you to update your question with more details on what you are storing within the hash / list / set.
We applied the following configuration, which helped reduce the memory footprint by 40%:
list-max-ziplist-entries 2048
list-max-ziplist-value 10000
list-compress-depth 1
set-max-intset-entries 2048
hash-max-ziplist-entries 2048
hash-max-ziplist-value 10000
Also, we increased the RAM on the Linux server, and that helped us with the Redis memory issues.

How to speed up my neo4j instance

I'm running neo4j version 2.2.5. I love the Cypher language, the Python integration, the ease of use, and the very responsive user community.
I've developed a prototype of an application and am encountering some very poor performance. I've read a lot of links related to performance tuning. I will attempt to outline my entire database here so that someone can provide guidance.
My machine is a MacBook Pro with 16GB of RAM and a 500GB SSD. It's very fast for everything else I do in Spark + Python + Hadoop. It's fast for Neo4j too, BUT once I get to around 2-4M nodes it becomes insanely slow.
I've used both of these commands to start up neo4j, thinking they would help, but neither made much difference:
./neo4j-community-2.2.5/bin/neo4j start -Xms512m -Xmx3g -XX:+UseConcMarkSweepGC
./neo4j-community-2.2.5/bin/neo4j start -Xms512m -Xmx3g -XX:+UseG1GC
My neo4j.properties file is as follows:
################################################################
# Neo4j
#
# neo4j.properties - database tuning parameters
#
################################################################
# Enable this to be able to upgrade a store from an older version.
#allow_store_upgrade=true
# The amount of memory to use for mapping the store files, in bytes (or
# kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
# If Neo4j is running on a dedicated server, then it is generally recommended
# to leave about 2-4 gigabytes for the operating system, give the JVM enough
# heap to hold all your transaction state and query context, and then leave the
# rest for the page cache.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 75% of RAM minus the max Java heap size.
dbms.pagecache.memory=6g
# Enable this to specify a parser other than the default one.
#cypher_parser_version=2.0
# Keep logical logs, helps debugging but uses more disk space, enabled for
# legacy reasons To limit space needed to store historical logs use values such
# as: "7 days" or "100M size" instead of "true".
#keep_logical_logs=7 days
# Enable shell server so that remote clients can connect via Neo4j shell.
#remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0 for all interfaces).
#remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337.
#remote_shell_port=1337
# The type of cache to use for nodes and relationships.
#cache_type=soft
To create my database from a fresh start, I first create these constraints and indexes; they cover all of the node types and edge properties I'm using.
CREATE CONSTRAINT ON (id:KnownIDType) ASSERT id.id_type_value IS UNIQUE;
CREATE CONSTRAINT ON (p:PerspectiveKey) ASSERT p.perspective_key IS UNIQUE;
CREATE INDEX ON :KnownIDType(id_type);
CREATE INDEX ON :KnownIDType(id_value);
CREATE INDEX ON :KNOWN_BY(StartDT);
CREATE INDEX ON :KNOWN_BY(EndDT);
CREATE INDEX ON :HAS_PERSPECTIVE(Country);
I have 8,601,880 nodes.
I run this query, and it takes 9 minutes.
MATCH (l:KnownIDType { id_type:'CodeType1' })<-[e1:KNOWN_BY]-(m:KnownIDType { id_type:'CodeType2' })-[e2:KNOWN_BY]->(n:KnownIDType)<-[e3:KNOWN_BY]-(o:KnownIDType { id_type:'CodeType3' })-[e4:KNOWN_BY]->(p:KnownIDType { id_type:'CodeType4' }), (n)-[e5:HAS_PERSPECTIVE]->(q:PerspectiveKey {perspective_key:100})
WHERE 1=1
AND l.id_type IN ['CodeType1']
AND m.id_type IN ['CodeType2']
AND n.id_type IN ['CodeTypeA', 'CodeTypeB', 'CodeTypeC']
AND o.id_type IN ['CodeType3']
AND p.id_type IN ['CodeType4']
AND 20131231 >= e1.StartDT and 20131231 < e1.EndDT
AND 20131231 >= e2.StartDT and 20131231 < e2.EndDT
AND 20131231 >= e3.StartDT and 20131231 < e3.EndDT
AND 20131231 >= e4.StartDT and 20131231 < e4.EndDT
WITH o, o.id_value as KnownIDValue, e5.Country as Country, count(distinct p.id_value) as ACount
WHERE ACount > 1
RETURN 20131231 as AsOfDate, 'CodeType' as KnownIDType, 'ACount' as MetricName, count(ACount) as MetricValue
;
I'm looking for a response time more like 15s or less, as I get with < 1M nodes.
What would you suggest? I am happy to provide more information if you tell me what you need.
Thanks a bunch in advance.
Here are a couple of ideas for speeding up your query:
Don't use IN if there is only one element; use = instead.
With a growing number of nodes, the index lookup will obviously take longer. Instead of having a single label with an indexed property, you could use the id_type property as a label, something like (l:KnownIDTypeCode1)<-[e1:KNOWN_BY]-(m:KnownIDTypeCode2).
Split the query into two parts: first MATCH your KNOWN_BY path, then collect what you need using WITH and MATCH the HAS_PERSPECTIVE part.
The range queries on the StartDT and EndDT properties could be slow. Try removing them to test whether they are slowing down the query.
Also, it looks like you could replace the >= and < with =, since you use the same date everywhere.
If you really have to filter on date ranges a lot, it might help to model them in your graph. One option would be to use KnownBy nodes instead of KNOWN_BY relationships and connect them to Date nodes.
First, upgrade to version 2.3, because it should improve performance - http://neo4j.com/release-notes/neo4j-2-3-0/
Hint
It doesn't make sense to use IN for an array with one element.
Profile your query with EXPLAIN and PROFILE:
http://neo4j.com/docs/stable/how-do-i-profile-a-query.html
Martin, your second recommendation has sped up my matching paths to single-digit seconds; I am grateful for your help. While it involved refactoring the design of my graph and query patterns, it improved performance dramatically. I decided to create CodeType1, CodeType2, CodeType[N] as node labels, and minimized the use of node properties, except for keeping the temporal properties on the edges. Thank you again so much! Please let me know if there is anything I can do to help.