Passing output of a TestStep to another TestStep - opentap-proj

In Keysight OpenTAP, there is Test Step 1. I would like to pass the result obtained by Test Step 1 to any of the subsequent Test Steps.
Example: Test Step 1 executes a SCPI query, and the result obtained from it has to be passed to Test Step N.
For this, we have extended TestStep and created our own test step.
Is there any built-in feature in OpenTAP we could make use of?

Are you asking for this feature to be provided by the built-in 'Basic/Flow Steps'?
Input/Output parameters are fully supported by the OpenTAP engine, and as you point out, you have created your own TestStep, which I assume utilises this approach.
Any generic TestStep that consumes the output of another TestStep would need to understand the data in order to make use of it in anything but the simplest of cases.
Do you have a specific example?

Related

How do I use fmi2SetReal in C++ to set the value of an input parameter to simulate from an FMU model?

I am trying to use the fmi4cpp API in C++ to run a simulation from an FMU, and I would like to set some input parameters. What's the best way to do that for (1) fixed parameters and (2) continuous inputs? I came across the fmi2SetReal function, but I'm struggling to find practical examples.
Thank you very much.
Using fmi4cpp, the call to fmi2SetReal is handled through the write_real functions in your slave instance. Location in source

Use of extension functions in streaming mode in Saxon/Net

I would like to know whether it is possible to call user extension functions in streaming mode using XSLT 3.0 on Saxon EE for .Net.
And if it is, under what constraints (e.g. "never pass nodes as parameters, but atomic values are OK", etc.)?
I have looked at the main documentation but could not find anything.
I don't think it's been tested, so I suggest you try it and see. The streamability analysis uses the general streamability rules, so if you pass atomic values (or unstreamed nodes) then it should be OK. You might be able to get away with passing nodes as well, provided you don't try doing any downwards navigation from them. If you pass a sequence of nodes rather than a single node then they may get buffered.

Save a value into a variable and then use it

I need to calculate the percentile 85 and then save in a variable because I want to use it in many condition sentences like:
IF(variable>percentile85) a=0.
IF(variable2>percentile85) b=0.
IF(variable3>percentile85) c=0.
Is there a way to save a value into a variable and then use it?
Use the RANK command with NTILES, then use the new variable it creates:
RANK VARIABLES=YourVar(A) /NTILES(100).
IF(NYourVar > 85) a=0.
EXECUTE.
Note that you can actually use Python in SPSS. The downloadable Programming and Data Management book has many examples. The SPSSINC TRANS extension command makes it particularly easy to do data transformations using Python code.
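If you go the Python route the answer mentions, the pattern boils down to: compute the percentile once, keep it in a variable, and reuse it in every condition. A minimal sketch of that idea with pandas/NumPy outside SPSS (the data and column names are made up for illustration):

```python
import numpy as np
import pandas as pd

# Toy data standing in for the SPSS dataset (hypothetical values).
df = pd.DataFrame({
    "variable":  [10, 20, 30, 40, 95],
    "variable2": [5, 50, 60, 70, 99],
    "variable3": [1, 2, 3, 90, 97],
})

# Compute the 85th percentile of the reference variable once...
percentile85 = np.percentile(df["variable"], 85)

# ...then reuse it in as many conditions as needed, mirroring
# IF(variable > percentile85) a = 0. in SPSS syntax.
df.loc[df["variable"]  > percentile85, "a"] = 0
df.loc[df["variable2"] > percentile85, "b"] = 0
df.loc[df["variable3"] > percentile85, "c"] = 0
```

The same structure would apply inside SPSSINC TRANS: one computation, many reuses.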

optimizing dask Series filtering - lazy version of Series.isin()

I currently have the following pattern embedded inside a larger computation
seq1.isin(seq2[seq3].unique().compute().values)
where seq3 is a boolean Series.
The performance seems acceptable, but it is ugly and the use of compute() forces evaluation, possibly removing opportunities for parallelism.
Simply saying
seq1.isin(seq2[seq3].unique())
does not work, and the documentation says that the argument to isin must be an (I presume NumPy) array.
Is there a better way to write the above code?
What if seq1 and seq2 are the same?
I don't think it's possible to do an incremental set-membership operation. To get a correct result, you'd need a fully realized set in order to answer whether an item is a member of it or not.
You could probably achieve this operation using an inner join.
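A minimal sketch of the inner-join idea, shown here with plain pandas for brevity (dask.dataframe mirrors this API, and in dask the merge stays lazy, unlike the explicit compute() in the original pattern); the Series contents are made up for illustration:

```python
import pandas as pd

# Stand-ins for the Series in the question (illustrative data).
seq1 = pd.Series([1, 2, 3, 4, 5])
seq2 = pd.Series([2, 4, 6, 8, 10])
seq3 = pd.Series([True, True, False, False, True])  # boolean mask over seq2

# The original pattern: materialize the filtered uniques, then isin().
mask_isin = seq1.isin(seq2[seq3].unique())

# The join alternative: express membership as an inner join on the values.
# With dask.dataframe, merge produces a lazy result, so no compute() is
# forced mid-pipeline.
members = pd.Series(seq2[seq3].unique(), name="v")
joined = pd.merge(
    seq1.rename("v").to_frame(),
    members.to_frame(),
    on="v",
    how="inner",
)
```

Because the right-hand side holds unique values, the inner join keeps exactly the elements of seq1 that isin() would have flagged.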

Erlang: Compute data structure literal (constant) at compile time?

This may be a naive question, and I suspect the answer is "yes," but I had no luck searching here and elsewhere on terms like "erlang compiler optimization constants" etc.
At any rate, can (will) the erlang compiler create a data structure that is constant or literal at compile time, and use that instead of creating code that creates the data structure over and over again? I will provide a simple toy example.
test() -> sets:from_list([usd, eur, yen, nzd, peso]).
Can (will) the compiler simply stick the set there at the output of the function instead of computing it every time?
The reason I ask is, I want to have a lookup table in a program I'm developing. The table is just constants that can be calculated (at least theoretically) at compile time. I'd like to just compute the table once, and not have to compute it every time. I know I could do this in other ways, such as compute the thing and store it in the process dictionary for instance (or perhaps an ets or mnesia table). But I always start simple, and to me the simplest solution is to do it like the toy example above, if the compiler optimizes it.
If that doesn't work, is there some other way to achieve what I want? (I guess I could look into parse transforms if they would work for this, but that's getting more complicated than I would like?)
THIS JUST IN. I used compile:file/2 with an 'S' option to produce the following. I'm no erlang assembly expert, but it looks like the optimization isn't performed:
{function, test, 0, 5}.
{label,4}.
{func_info,{atom,exchange},{atom,test},0}.
{label,5}.
{move,{literal,[usd,eur,yen,nzd,peso]},{x,0}}.
{call_ext_only,1,{extfunc,sets,from_list,1}}.
No, the Erlang compiler doesn't perform partial evaluation of calls to external modules, which sets is. You can use the ct_expand module of the well-known parse_trans library to achieve this effect.
Given that a set is not a native datatype for Erlang and is, as a matter of fact, just a library written in Erlang, I don't think it's feasible for the compiler to create sets at compile time.
As you can see, sets are not optimized in Erlang (nor is any other library written in Erlang).
One way of solving your problem is to compute the set once and pass it as a parameter to the functions, or to use ETS/Mnesia.
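The "compute it once and pass it around" workaround is language-agnostic; here is the same idea sketched in Python, where a constant built once at module load plays the role of the precomputed set (names are illustrative, not from the original thread):

```python
# Build the constant lookup structure exactly once, at import time,
# instead of rebuilding it on every call (the analogue of calling
# sets:from_list/1 inside the function body on each invocation).
CURRENCIES = frozenset({"usd", "eur", "yen", "nzd", "peso"})

def is_supported(currency):
    # Membership test against the precomputed set; no per-call rebuild.
    return currency in CURRENCIES
```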
