How to find FlexLM license feature usage over time? - flexlm

I have developed a script that screens out the "IN:" and "OUT:" entries for a particular feature from the debug.log file, as below:
MODULE IN_OUT TIMESTAMP VAR USAGE
PROE_Flex3C OUT 4/8/2017 19:27 1 1
PROE_Flex3C OUT 4/8/2017 19:27 1 2
PROE_Flex3C OUT 4/8/2017 19:27 1 3
PROE_Flex3C OUT 4/8/2017 19:27 1 4
PROE_Flex3C OUT 4/9/2017 2:39 1 5
PROE_Flex3C IN 4/9/2017 2:42 -1 4
PROE_Flex3C OUT 4/9/2017 5:45 1 5
PROE_Flex3C OUT 4/9/2017 5:46 1 6
PROE_Flex3C IN 4/9/2017 5:50 -1 5
PROE_Flex3C IN 4/9/2017 5:53 -1 4
For every OUT (license checked out) the script adds +1 and for every IN (license checked in) it subtracts 1 in the VAR field, and displays the running count under USAGE.
My intent is to plot the usage of a feature (e.g. PROE_Flex3C) over time (TIMESTAMP). When I use the method described, I find that the peak usage goes beyond the total number of licenses available for the feature. For example, if the total number of licenses available for PROE_Flex3C is 34, the graph shows a maximum license utilization of 40.
FLEXnet Licensing (v10.8.6.2 build 59284 x64_n6). How can I make the peak license usage count accurate? What could be missing?
The problem is that the peak usage counted by this method exceeds the total number of available licenses. It looks like the server is leaking licenses?

A solution from an acquaintance:
The number of 'OUT/IN' pairs is not necessarily tied to the number of tokens consumed for the increment/feature.
For Matlab, if a user has a Network Named User (NNU) token, he can launch many instances of Matlab from the same computer. Each instance writes an 'OUT' to the logfile, but the number of tokens is unchanged for an NNU token with Matlab. To avoid your situation, check the 'login/OUT' pair before changing the number of tokens used.
How the 'OUT/IN' entries map to the number of tokens used depends on the vendor of your software and the type of license used.
[Update 09/18/2017]
The solution depends entirely on the vendor and on your license model. In my situation, your script would give wrong results.
In my example, I use Matlab.
The steps:
from the host 'hal9000', I launch a first Matlab session;
from the host 'hal9000', I launch a second session.
My license file contains more than one kind of license (CN and NNU licenses), which is the reason for the 'DENIED' entries. The traces in the logfile are the following:
15:10:58 (MLM) DENIED: "MATLAB" my_login#hal9000 (User/host on EXCLUDE list for feature. (-38,348))
15:10:58 (MLM) DENIED: "MATLAB" my_login#hal9000 (User/host on EXCLUDE list for feature. (-38,348))
15:10:58 (MLM) DENIED: "MATLAB" my_login#hal9000 (User/host on EXCLUDE list for feature. (-38,348))
15:10:58 (MLM) OUT: "MATLAB" my_login#hal9000
15:12:16 (MLM) OUT: "MATLAB" my_login#hal9000
15:13:06 (MLM) IN: "MATLAB" my_login#hal9000
15:40:35 (MLM) IN: "MATLAB" my_login#hal9000
In my case, with an NNU token assigned to 'my_login' for the MATLAB increment, both 'OUT' operations (15:10:58 and 15:12:16) are logged in the logfile, but only one token is removed from the number of available tokens for the MATLAB increment.
If the number of tokens before the first 'OUT' was 6, after the second 'OUT' it will be 5 and not 4, because I use the same host 'hal9000'. After the first 'OUT', all 'OUT' entries associated with 'my_login#hal9000' are logged but do not change the number of tokens used by 'my_login' on the host 'hal9000'.
I have the same situation with Comsol, where every 'OUT' made by 'my_login#hal9000' is written to the logfile but the number of available tokens is decremented by only 1 token for all of those 'OUT' operations.
In your script, it may be necessary to check the 'login#host' association to decide whether or not to change the number of tokens used, in accordance with your license model.
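Following that advice, here is a minimal sketch in Python of how a log parser could track concurrent usage per 'login#host' instead of blindly adding and subtracting 1 for every line. The log pattern, the file name debug.log, and the counting rule (only the first OUT and the last IN for a given user#host change the count, which matches the NNU behaviour described above) are assumptions you would need to adapt to your vendor's license model:

import re
from collections import defaultdict

# feature -> user#host -> number of currently open checkouts
checkouts = defaultdict(lambda: defaultdict(int))
# feature -> concurrent licenses counted as "in use"
usage = defaultdict(int)

# Matches lines such as: 15:10:58 (MLM) OUT: "MATLAB" my_login#hal9000
line_re = re.compile(r'\(\w+\)\s+(OUT|IN):\s+"([^"]+)"\s+(\S+)')

with open("debug.log") as f:
    for line in f:
        m = line_re.search(line)
        if not m:
            continue
        event, feature, user_host = m.groups()
        if event == "OUT":
            if checkouts[feature][user_host] == 0:
                usage[feature] += 1            # first checkout for this user#host
            checkouts[feature][user_host] += 1
        elif checkouts[feature][user_host] > 0:
            checkouts[feature][user_host] -= 1
            if checkouts[feature][user_host] == 0:
                usage[feature] -= 1            # last check-in for this user#host
        # one usage sample per log event; add the timestamp column for plotting
        print(feature, event, usage[feature])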

Related

R - how to exclude penny stocks from the environment before calculating adjusted stock returns

In my current research I'm trying to find out how big the impact of ad-hoc sentiment on daily stock returns is.
The calculations have worked quite well and the results are also plausible.
The calculations so far, using the quantmod package and Yahoo financial data, look like this:
library(quantmod)
library(data.table)

# Symbols: character vector of tickers defined elsewhere
myenviron <- new.env()
getSymbols(c("^CDAXX", Symbols), env = myenviron, src = "yahoo",
           from = as.Date("2007-01-02"), to = as.Date("2016-12-30"))
Returns <- eapply(myenviron, function(s) ROC(Ad(s), type = "discrete"))
ReturnsDF <- as.data.table(do.call(merge.xts, Returns))
# adjust column names
colnames(ReturnsDF) <- gsub(".Adjusted", "", colnames(ReturnsDF))
ReturnsDF <- as.data.table(ReturnsDF)
However, to make it more robust against the noisy influence of penny-stock data, I wonder how it is possible to exclude stocks that at any point in the period fall below a certain value x, let's say 1€.
I guess the best approach would be to exclude them before calculating the returns and merging the xts objects, or even better, before downloading them with the getSymbols command.
Does anybody have an idea how this could best be done? Thanks in advance.
Try this:
build a price frame of the adjusted closing prices of your symbols (I use the PF function of the quantmod add-on package qmao, which has lots of other useful functions for this type of analysis: install.packages("qmao", repos="http://R-Forge.R-project.org"));
check by column whether any price is below your minimum trigger price;
select only the columns which have no closes below the trigger price.
To stay more flexible I would suggest taking a sub-period, let's say no price below 5 during the last 21 trading days. The toy example below may illustrate my point.
I use AAPL, FB and MSFT as the symbol universe.
> symbols <- c('AAPL','MSFT','FB')
> getSymbols(symbols, from='2018-02-01')
[1] "AAPL" "MSFT" "FB"
> prices <- PF(symbols, silent = TRUE)
> prices
AAPL MSFT FB
2018-02-01 167.0987 93.81929 193.09
2018-02-02 159.8483 91.35088 190.28
2018-02-05 155.8546 87.58855 181.26
2018-02-06 162.3680 90.90299 185.31
2018-02-07 158.8922 89.19102 180.18
2018-02-08 154.5200 84.61253 171.58
2018-02-09 156.4100 87.76771 176.11
2018-02-12 162.7100 88.71327 176.41
2018-02-13 164.3400 89.41000 173.15
2018-02-14 167.3700 90.81000 179.52
2018-02-15 172.9900 92.66000 179.96
2018-02-16 172.4300 92.00000 177.36
2018-02-20 171.8500 92.72000 176.01
2018-02-21 171.0700 91.49000 177.91
2018-02-22 172.5000 91.73000 178.99
2018-02-23 175.5000 94.06000 183.29
2018-02-26 178.9700 95.42000 184.93
2018-02-27 178.3900 94.20000 181.46
2018-02-28 178.1200 93.77000 178.32
2018-03-01 175.0000 92.85000 175.94
2018-03-02 176.2100 93.05000 176.62
Let's assume you would like any instrument that traded below 175.40 during the last 6 trading days to be excluded from your analysis :-) .
As you can see, that excludes AAPL and MSFT.
apply and the base function any, applied(!) to a 6-day subset of prices, give us exactly what we want. Showing the last 3 days of prices, excluding the instruments that did not meet our condition:
> tail(prices[,apply(tail(prices),2, function(x) any(x < 175.4)) == FALSE],3)
FB
2018-02-28 178.32
2018-03-01 175.94
2018-03-02 176.62

Dataflow - Approx Unique on unbounded source

I'm getting unexpected results streaming in the cloud.
My pipeline looks like:
SlidingWindow(60min).every(1min)
.triggering(Repeatedly.forever(
AfterWatermark.pastEndOfWindow()
.withEarlyFirings(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(30)))
)
)
.withAllowedLateness(15sec)
.accumulatingFiredPanes()
.apply("Get UniqueCounts", ApproximateUnique.perKey(.05))
.apply("Window hack filter", ParDo(
if(window.maxTimestamp.isBeforeNow())
c.output(element)
)
)
.toJSON()
.toPubSub()
If that filter isn't there, I get 60 windows per output, apparently because the Pub/Sub sink isn't window-aware.
So in the examples below, if each time period is a minute, I'd expect to see the unique count grow until the 60-minute sliding window closes.
Using DirectRunner, I get expected results:
t1: 5
t2: 10
t3: 15
...
tx: growing unique count
In Dataflow, I get weird results:
t1: 5
t2: 10
t3: 0
t4: 0
t5: 2
t6: 0
...
tx: wrong unique count
However, if my unbounded source has older data, I'll get normal-looking results until it catches up, at which point I'll get the wrong results.
I was thinking it had to do with my window filter, but removing that didn't change the results.
If I do a Distinct() then Count().perKey(), it works, but that slows my pipeline considerably.
What am I overlooking?
[Update from the comments]
ApproximateUnique inadvertently resets its accumulated value when the result is extracted. This is incorrect when the value is read more than once, as with windows firing multiple times. Fix (will be in version 2.4): https://github.com/apache/beam/pull/4688
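For reference, a toy sketch of the exact Distinct-then-Count approach mentioned in the question, written against the Beam Python SDK on a small in-memory collection (the real pipeline is Java and adds the windowing, triggering and Pub/Sub I/O shown above):

import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "Create" >> beam.Create([("key", "a"), ("key", "a"), ("key", "b")])
     | "Dedup" >> beam.Distinct()                       # drop duplicate (key, value) pairs
     | "UniquePerKey" >> beam.combiners.Count.PerKey()  # exact unique count per key
     | "Print" >> beam.Map(print))                      # prints ('key', 2)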

insufficient memory when using proc assoc in SAS

I'm trying to run the following and I receive an error saying ERROR: The SAS System stopped processing this step because of insufficient memory.
The dataset has about 1170 rows * 90 columns. What are my alternatives here?
The error information is below:
332 proc assoc data=want1 dmdbcat=dbcat pctsup=0.5 out=frequentItems;
333 id tid;
334 target item_new;
335 run;
----- Potential 1 item sets = 188 -----
Counting items, records read: 19082
Number of customers: 203
Support level for item sets: 1
Maximum count for a set: 136
Sets meeting support level: 188
Megs of memory used: 0.51
----- Potential 2 item sets = 17578 -----
Counting items, records read: 19082
Maximum count for a set: 119
Sets meeting support level: 17484
Megs of memory used: 1.54
----- Potential 3 item sets = 1072352 -----
Counting items, records read: 19082
Maximum count for a set: 111
Sets meeting support level: 1072016
Megs of memory used: 70.14
Error: Out of memory. Memory used=2111.5 meg.
Item Set 4 is null.
ERROR: The SAS System stopped processing this step because of insufficient memory.
WARNING: The data set WORK.FREQUENTITEMS may be incomplete. When this step was stopped there were
1089689 observations and 8 variables.
From the documentation (http://support.sas.com/documentation/onlinedoc/miner/em43/assoc.pdf):
Caution: The theoretical potential number of item sets can grow very
quickly. For example, with 50 different items, you have 1225 potential
2-item sets and 19,600 3-item sets. With 5,000 items, you have over 12
million of the 2-item sets, and a correspondingly large number of
3-item sets.
Processing an extremely large number of sets could cause your system
to run out of disk and/or memory resources. However, by using a higher
support level, you can reduce the item sets to a more manageable
number.
So, provide a support= option and make sure it's sufficiently high, e.g.:
proc assoc data=want1 dmdbcat=dbcat pctsup=0.5 out=frequentItems support=20;
id tid;
target item_new;
run;
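To get a feel for how fast the candidate item sets in that caution grow, note that the counts are just binomial coefficients; a quick check in plain Python (nothing SAS-specific):

from math import comb

for n in (50, 5000):
    print(n, comb(n, 2), comb(n, 3))
# 50   ->      1,225 2-item sets and         19,600 3-item sets
# 5000 -> 12,497,500 2-item sets and 20,820,835,000 3-item sets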
Is there a way to frame the data mining task so that it requires less memory for storage or operations? In other words, do you need all 90 columns or can you eliminate some? Is there some clear division within the data set such that PROC ASSOC wouldn't be expected to use those rows for its findings?
You may very well be up against software memory allocation limits here.

erlang crash dump no more index entries

I have a problem with Erlang.
One of my Erlang nodes crashes and generates erl_crash.dump with the reason: no more index entries in atom_tab (max=1048576).
I checked the dump file and found that there are a lot of atoms of the form 'B\2209\000... (about 1,000,000 entries):
=proc:<0.11744.7038>
State: Waiting
Name: 'B\2209\000d\022D.
Spawned as: proc_lib:init_p/5
Spawned by: <0.5032.0>
Started: Sun Feb 23 05:23:27 2014
Message queue length: 0
Number of heap fragments: 0
Heap fragment data: 0
Reductions: 1992
Stack+heap: 1597
OldHeap: 1597
Heap unused: 918
OldHeap unused: 376
Program counter: 0x0000000001eb7700 (gen_fsm:loop/7 + 140)
CP: 0x0000000000000000 (invalid)
arity = 0
Do you have any experience with what they are?
Atoms
By default, the maximum number of atoms is 1048576. This limit can be raised or lowered using the +t option.
Note: an atom refers into an atom table which also consumes memory. The atom text is stored once for each unique atom in this table. The atom table is not garbage-collected.
I think you are producing a lot of atoms in your program, and the number of atoms has reached the atom limit.
You can use the +t option to change the atom limit in your Erlang VM when you start your Erlang node.
So it tells you that you are generating atoms. Somewhere there is a list_to_atom/1 call with a variable argument. Because you have processes with this sort of name, you register/2 processes under these names. It may be your code or some third-party module you use. It's bad behavior; don't do that, and don't use modules that do it.
To be honest, I can imagine a design where I would do this intentionally, but it is a very special case and obviously not the case when you have to ask this question.

How to evaluate a search/retrieval engine using trec_eval?

Is there anybody who has used trec_eval? I need a "trec_eval for dummies".
I'm trying to evaluate a few search engines to compare parameters like recall/precision, ranking quality, etc. for my thesis work. I cannot find out how to use trec_eval to send queries to the search engine and get a result file that can be used with trec_eval.
Basically, for trec_eval you need a (human generated) ground truth. That has to be in a special format:
query-number 0 document-id relevance
Given a collection like 101Categories (wikipedia entry) that would be something like
Q1046 0 PNGImages/dolphin/image_0041.png 0
Q1046 0 PNGImages/airplanes/image_0671.png 128
Q1046 0 PNGImages/crab/image_0048.png 0
The query-number therefore identifies a query (e.g. a picture from a certain category, used to find similar ones). The results from your search engine then have to be transformed to look like
query-number Q0 document-id rank score Exp
or in reality
Q1046 0 PNGImages/airplanes/image_0671.png 1 1 srfiletop10
Q1046 0 PNGImages/airplanes/image_0489.png 2 0.974935 srfiletop10
Q1046 0 PNGImages/airplanes/image_0686.png 3 0.974023 srfiletop10
as described here. You might have to adjust the path names for the "document-id". Then you can calculate the standard metrics with trec_eval groundtruth.qrel results.
trec_eval --help should give you some ideas to choose the right parameters for using the measurements needed for your thesis.
trec_eval does not send any queries; you have to prepare them yourself. trec_eval only does the analysis, given a ground truth and your results.
Some basic information can be found here and here.
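Since trec_eval only consumes files, here is a minimal sketch of how you might write the two files in the formats above from your own results and then call the tool (the file names, the example results dict and the run tag "myrun" are made up for illustration):

import subprocess

# qrels: "query-number 0 document-id relevance" (human-judged ground truth)
qrels = [("Q1046", "PNGImages/airplanes/image_0671.png", 128),
         ("Q1046", "PNGImages/dolphin/image_0041.png", 0)]
with open("groundtruth.qrel", "w") as f:
    for qid, doc, rel in qrels:
        f.write(f"{qid} 0 {doc} {rel}\n")

# run: "query-number Q0 document-id rank score run-tag" (your engine's ranked output)
results = {"Q1046": [("PNGImages/airplanes/image_0671.png", 1.0),
                     ("PNGImages/airplanes/image_0489.png", 0.974935)]}
with open("results.run", "w") as f:
    for qid, ranked in results.items():
        for rank, (doc, score) in enumerate(ranked, start=1):
            f.write(f"{qid} Q0 {doc} {rank} {score} myrun\n")

subprocess.run(["trec_eval", "groundtruth.qrel", "results.run"])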
