Deletion of module columns/attributes not visible in a given View - ibm-doors

I have been given the task of creating a DXL script. First problem is that I have never used DXL before, even though I have many years experience with DOORS itself. I have been surfing the Net to seek guidance on my particular problem. I also have a few specimen DXL scripts for reference.
My new client requires that for each View of a given Module, of which there are many Views, new "reduced" Modules are to be produced reflecting each View.
By "reduced", I mean that these new Modules are to contain nothing that isn't actually needed for that View., i.e. Columns, Attributes etc. These new Modules will only have the single View.
So, the way forward as I see it is to take copies of the single master Module, one for each View; rename those copies to reflect the given master Module/required View; select that required View in each copy Module; and then delete everything that is not needed by that View, i.e. available Columns, Attributes, etc.
This would be simple if I had the required DXL knowledge, which I am endeavouring to pick up as fast as I can.
If at all possible, this script has to be generic and be able to work upon any of the master Module copies to produce the associated "reduced" Module reflecting a particular View.
The client aims to use the script periodically for View archiving (I know, that's the way they want it).
Clarification
Some clarification of what I believe is required, given the following text from my original question:
If at all possible, this script has to be generic and be able to work upon any of the master Module copies to produce the associated "reduced" Module reflecting a particular View.
So, say there are ten Views of the master Module. Outside of the DXL script, I would copy the master Module ten times, renaming each copy to reflect each of the ten Views. Unless you know differently, each of those ten copies will have the same "Absolute Number"s as the master Module, so no problem there?
So, starting with the first of the copied Modules, each named to reflect the View it will eventually represent, its View would be set, from the ten Views available to it, to the one that matches its title.
The single generic DXL script would then be run against that first copy Module, the aim being to delete everything not actually needed for that view, i.e. Attributes, Columns etc. Would some kind of purging command be required in the script for any aforementioned deleted items?
The single generic DXL script would then delete ALL Views from that copy Module. The log that is produced when running the script also needs capturing, but I'm not sure whether this should be done from within the script, if possible, or as a separate manual task outside of the script.
The aforementioned (indented) process would then be repeated, using the same generic script, against the remaining nine copied Modules. The intention is to leave us with ten copy Modules, each one reflecting one of the ten possible Views, and each one containing only the Attributes, Columns etc. required for that View.

Creating a mirror of a module with this approach is not so easy IMO. Think e.g. about "Absolute Number". If the original module contains the numbers 15 (level 1), 2000 (level 2), 1 (level 1), you will have to create 2000 objects, purge 1997 of them and move them to the correct place.
There is a "duplicate" tool at https://www.ibm.com/developerworks/community/forums/html/topic?id=43862118-113d-4eac-b3f1-21d3b73959d1 which tries to do this, but as stated there, this script is said not to work correctly in all situations.
So, I would rather use the approach "string clipCopy (Item i); string clipPaste(Folder folderRef)". It should be faster and less error prone. But: all Out-Links will also be copied with this method, so you will probably have to delete these after the copy, or else the link target module(s) will have lots of In-Links.
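For reference, a minimal DXL sketch of that copy step (the module and folder paths are hypothetical; both perms return an error string, which is null on success):

Item masterModule = item("/Project/Master Module")
Folder targetFolder = folder("/Project/View Archive")
string errMsg

// Copy the master module to the DOORS clipboard...
errMsg = clipCopy(masterModule)
if (!null errMsg) print "clipCopy failed: " errMsg "\n"

// ...and paste it into the archive folder; the pasted copy can then be renamed.
errMsg = clipPaste(targetFolder)
if (!null errMsg) print "clipPaste failed: " errMsg "\n"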
The problem is still not so easy to solve, as every view might have DXL columns that rely on one attribute or another, and it might contain DXL attributes which in turn might rely on something else. I doubt that there is a way to analyze DXL code "on the fly" and find out which columns may be deleted.
Perhaps a totally different approach would be feasible: open each view and create an export to Excel; this way you will get rid of any dynamic dependencies. Then re-import the Excel sheet into a new DOORS module. You will still have the "Absolute Number" problem, but perhaps you can make a deal that you will have a pseudo attribute "Original Absolute Number" and disregard the "new" "Absolute Number".
Quite a big task for a DXL beginner....
Update: On second thought, perhaps you might want to combine these approaches
agree with your employer that you will use an alternative attribute for Absolute Number
use a loop like Russel suggested; when creating objects, remember that an object might have to be created "below" or "after" its predecessor or sibling
for DXL attributes do not copy the DXL code but the actual current value of the object (see the sketch below)
for DXL columns create pseudo attributes and create a new view that uses these pseudo attributes instead of the original value
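A minimal sketch of the "copy the current value" idea; the module paths and the attribute names "Status" and "Status Snapshot" are placeholders, and "Status Snapshot" is assumed to be a plain (non-DXL) text attribute already created in the copy module:

Module mSrc = read("/Project/Master Module", false)
Module mTgt = edit("/Project/Copy Module", false)

Object src = first mSrc
Object tgt = first mTgt

// Concatenating "" forces the attribute reference to its current string value,
// so the DXL code behind the source attribute is not carried over.
string currentValue = src."Status" ""
tgt."Status Snapshot" = currentValue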

Copying the entire module, then deleting everything not in that view, seems worse than just copying the things you need from each particular view.
I would take the following as the outline of your program:
for view in main module do {
    for column in view do {
        Find attribute for each column and store (possibly in a skip list?)
        Store name of column
    }
    create new module
    create needed types / attributes in new module
    create new view in new module
    for object in main module {
        create object in new module
        for attribute in main module {
            check if attribute is in new module {
                copy info from old object to new
            }
        }
    }
}
Each of these "for X in Y" loops should be in the DXL Reference Manual in some form or another.
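As a starting point, a minimal DXL sketch of two of those loops (the module path is hypothetical; output goes to the DXL interaction window):

Module m = read("/Project/Master Module", true)
Object o
AttrDef ad

// Iterate the module's attribute definitions.
for ad in m do {
    string attrName = ad.name
    if (ad.object) print "object attribute: " attrName "\n"
}

// Iterate the objects displayed in the module's current view.
for o in m do {
    print identifier(o) "\n"
}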
If you need more help, let me know!

Related

Is there a way to view / edit the definition of a computed variable after it has been created?

I have created some new computed variables in SPSS. I would like to be able to view the definitions (to check for errors) and possibly edit them after the fact. I cannot find a way to do this or find any advice on the internet.
I can see the definitions in the saved syntax file but there does not seem to be a way to pull the definitions up and view them from the SAV file itself.
Note that this is NOT the same thing as recoding a variable - I want to be able to bring up something like the new variable dialog box for an existing computed variable, view the definition and, if necessary, edit it.
Double-click on the relevant output in your .SAV file.
Select, copy, and paste the COMPUTE statement to a Syntax file.
Edit the COMPUTE statement as desired.
Add the command EXECUTE. after the COMPUTE statement.
Select this block of code.
Click on the 'play' button in the ribbon (or, select Run > Run Selected).
This will recompute the variable.
CAUTION: When you run COMPUTE from syntax, you don't get the warning asking if you want to replace the existing variable.
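For example, the pasted syntax might look like this (the variable names here are placeholders):

* Re-running the definition replaces the existing computed variable without warning.
COMPUTE total_score = item1 + item2 + item3.
EXECUTE.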

How to pass an array from Bazel cli to rules?

Let's say I have a rule like this.
foo(
    name = "helloworld",
    myarray = [
        ":bar",
        "//path/to:qux",
    ],
)
In this case, myarray is static.
However, I want it to be given by cli, like
bazel run //:helloworld --myarray=":bar,//path/to:qux,:baz,:another"
How is this possible?
Thanks
To get exactly what you're asking for, Bazel would need to support LABEL_LIST in Starlark-defined command line flags, which are documented here:
https://docs.bazel.build/versions/2.1.0/skylark/lib/config.html
and here: https://docs.bazel.build/versions/2.1.0/skylark/config.html
Unfortunately that's not implemented at the moment.
If you don't actually need a list of labels (i.e., to create dependencies between targets), then maybe STRING_LIST will work for you.
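For illustration, a hypothetical sketch of such a flag using bazel_skylib's predefined string_list_flag rule (the flag name "myarray" and its wiring are made up; the values arrive as plain strings, not labels, so they cannot create dependencies):

load("@bazel_skylib//rules:common_settings.bzl", "string_list_flag")

string_list_flag(
    name = "myarray",
    build_setting_default = [],
)

A rule implementation could then read the value via an implicit label attribute pointing at ":myarray" and the BuildSettingInfo provider, and the flag would be set on the command line as, for example, bazel run //:helloworld --//:myarray=bar,baz.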
If you do need a list of labels, and the different possible values are known, then you can use --define, config_setting(), and select():
https://docs.bazel.build/versions/2.1.0/configurable-attributes.html
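For example, a hypothetical sketch of that pattern, with the two candidate dependency lists hard-coded in the BUILD file and select() picking one of them based on a --define value:

config_setting(
    name = "extra_deps",
    define_values = {"myarray": "extra"},
)

foo(
    name = "helloworld",
    myarray = select({
        ":extra_deps": [":bar", "//path/to:qux", ":baz", ":another"],
        "//conditions:default": [":bar", "//path/to:qux"],
    }),
)

This would then be invoked as bazel run //:helloworld --define=myarray=extra (or without the flag to get the default list).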
The question is what you are really after. Passing a variable or an array into the bazel build/run isn't really possible, at least not as such and not (mostly) without (very likely unwanted) side effects. Aren't you perhaps really just looking to pass arguments directly to what is being run by the run command, i.e. to the executable itself rather than to bazel?
There are a few ways you could sneak stuff in (in most cases you'd also need to come up with a syntax to pass the data on the CLI and unpack the array in a rule), but many come at a relatively substantial price.
You can define your array in a bzl file and load it from where the rule uses it. You can then dump the bzl content, rewriting your build/run configuration (also making it obvious and traceable), and load the bits from the rule (only affecting the rule loading and using the variable). E.g., BUILD file:
load(":myarray.bzl", "myarray")
foo(
name = "helloworld",
myarray = myarray,
],
)
And you can then call your build:
$ echo 'myarray=[":bar", "//path/to:qux", ":baz", ":another"]' > myarray.bzl
$ bazel run //:helloworld
Which you can of course put in a single wrapper script. If this really needs to be a bazel array, this one is probably the cleanest way to do that.
--workspace_status_command: you can collect information about your environment, add either or both of the resulting files as a dependency of your rule (depending on whether the inputs are meant to invalidate the rule results or not, you could use the volatile or the stable status file), and process the incoming file in what is being executed by the rule (at which point one would wonder why not pass it as command line arguments directly). If you use the stable status file, every other rule depending on it is also invalidated by any change.
You can do a similar thing by using --action_env. From within the executable/tool/script underpinning the rule, you can directly access the defined environment variable. However, this also means the environment of each rule is affected (not just the one you're targeting); and again, why would it parse the information from the environment rather than accept arguments on the command line?
There is also --define, but you would not really get direct access to its value so much as you could select() a choice out of possible options.

Modify local variable from another script in Lua

I'm trying to make a mod for the game Don't Starve Together, which makes use of Lua. For this reason, I can't modify their source variables/files.
In order to try to modify the world generation, I need to access a local table that was instantiated in another file (the file is called "levels.lua"). The variable name is "levellist". Is there a way to access the variable so that I can add certain elements to the table?
Namely, I want to add {"task_set", "cave_custom"} to levellist[DST_CAVE].overrides.
If someone could help or even just tell me if this is possible or not, that would be great. Thanks!
What you are trying to do simply doesn't make sense. Local variables are accessible only from the scope they were defined in, and its nested scopes. There is no normal way to change them from different scopes, let alone from an entirely different script.
If you want variables that all your scripts use, use globals.
Of course you can't get to local variables (i.e. "pointers") used by another function, save for obscure debug methods that are rarely exposed to the user sandbox, but you don't need to. You do not want to modify the local variable itself (i.e. make it point to another table, for example), but to get to the table it refers to and modify a value inside it. So you just need to find any place where that table is exposed to you in any way.
You should edit the relevant content into your question, because it is a pain to Alt-Tab back and forth to your files. According to the structure from the comments/chat, AddLevel(LEVELTYPE.SURVIVAL, ...) inserts an entry into the levellist[LEVELTYPE.SURVIVAL] table. If you check levels.lua, you can also see that it returns a table with sandbox_levels assigned to exactly this.
So:
local levels = require "levels"
print(levels.sandbox_levels)
-- Will print "table: SOMENUMBERS" - i.e. address of levellist[LEVELTYPE.SURVIVAL]
You can now iterate it with for idx = 1, #levels.sandbox_levels or with ipairs and find the entry belonging to "DST_CAVE". I can't tell what the field holding the ID will be called or how it will be structured, because the data is preprocessed with the function Level (which you did not include in the files you posted) before being inserted.
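A hypothetical sketch of that lookup (the field name "id" and the exact shape of "overrides" are guesses, since the structure produced by Level is not known):

local levels = require "levels"

for _, level in ipairs(levels.sandbox_levels) do
    if level.id == "DST_CAVE" then                 -- assumed field holding the level id
        level.overrides = level.overrides or {}
        table.insert(level.overrides, {"task_set", "cave_custom"})
    end
end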
As others have suggested, this may not be your best strategy.
But depending on your environment, it may be possible to abuse some more esoteric features of the runtime to let you indirectly modify values that aren't "yours". Have a look at debug.sethook and setfenv.

Processing multiline events from a text file in Dataflow

I am attempting to build a dataflow pipeline to process a text file which contains events that span multiple lines. The dataflow SDK TextIO class assumes each line is a new event.
My plan is to create a new TextReader and register it with the DataPipelineRunner. This new reader will know how to aggregate the multiple lines into a single line.
I am pretty sure that this approach will work but I am wondering if this is the right way to do it or if there is a simpler solution?
The text I am trying to parse is:
==============> len:45 pktype:4 mtype:2
SYMBOL: USOCSTIA151632.00
OPEN_INT: 212
PR_OPEN_INTEREST: 212
TIME_STAMP: 04/10/2015 06:30:17:420 val:1428661817
The result should be the last 4 lines concatenated together and the first line dropped.
Best regards,
Peter
Note that TextReader is an internal implementation detail class, so subclassing it would be highly discouraged and challenging to do properly.
The recommended way to define a new file-based format like yours is to subclass FileBasedSource using the user-defined source API.
In your case, I would recommend basing your class on the LineIO example from the documentation, and wrapping the LineReader defined there into your own class which would use LineReader as a helper for reading individual lines, but:
In startReading() it would skip until the line starting with "====>"
In readNextRecord() it would read lines until the next "====>" and bundle them into a single record.
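The aggregation step itself is independent of the SDK plumbing. As a rough, hypothetical sketch of what readNextRecord() has to do, here it is written against a plain BufferedReader instead of the LineReader helper, treating any line starting with "====" as a record delimiter:

import java.io.BufferedReader;
import java.io.IOException;

class MultilineRecordAssembler {
  private final BufferedReader reader;
  private String delimiterLine;  // the delimiter line already consumed from the stream

  MultilineRecordAssembler(BufferedReader reader) throws IOException {
    this.reader = reader;
    // Skip ahead until the first delimiter so reading starts on a record boundary.
    String line;
    while ((line = reader.readLine()) != null && !line.startsWith("====")) { }
    this.delimiterLine = line;
  }

  /** Returns the next record (its lines joined with '\n'), or null at end of input. */
  String readNextRecord() throws IOException {
    if (delimiterLine == null) {
      return null;  // no further delimiter was seen, so there are no more records
    }
    StringBuilder record = new StringBuilder();
    String line;
    while ((line = reader.readLine()) != null && !line.startsWith("====")) {
      if (record.length() > 0) {
        record.append('\n');
      }
      record.append(line);
    }
    delimiterLine = line;  // either the next delimiter or null at end of file
    return record.toString();
  }
}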
Please make sure to carefully read the documentation for FileBasedSource and FileBasedReader: the parallelization mechanism relies on the consistency properties described there, which your format has to satisfy to ensure that records are not duplicated or omitted on the boundaries between adjacent processing shards. The XmlSource tests are a good example of how to unit-test these properties.
Please tell us how it goes and report back with any problems or questions - we are very interested in feedback on this API.

How can I dynamically include modules in nested directories?

I want to dynamically load code by traversing a directory structure and dynamically load whatever modules I find there.
The purpose for doing so is to run a series of validations. If a top-level validation fails, any child validations will not be run.
My thinking was that a controller object could scan the directories, build up a hierarchy of modules and then make the decisions on whether or not to traverse a particular part of the tree based on the success/failure of higher-level validations.
For example, I might have a series of validations I want to run against a regex, however, none of the validations should be run if the regex doesn't exist or is empty. In this case, the top level directory would contain just the exists validation, and a child directory would contain all the other validations to be run if the regex exists.
Being able to define these validations in separate files and create the needed hierarchy would be extremely useful for ease of adding additional validations later, rather than having to crack open an existing class and add methods.
Is there a way an application can dynamically scan a directory, save the filenames in a collection and then use the elements of that collection in a require? I don't think so. What about a load?
Is there any way to achieve such a design? Or am I thinking about it all wrong and should think of some other methodology instead?
Your request is very doable, but no language will do it for you automatically. You have to write the code to dive into the directories, determine the existence of the tests and then decide whether you should drill down further.
Ruby will help you though. There is the Find module, which is included in the standard library. This is from its docs:
The Find module supports the top-down traversal of a set of file paths.
For example, to total the size of all files under your home directory,
ignoring anything in a "dot" directory (e.g. $HOME/.ssh):
require 'find'

total_size = 0
Find.find(ENV["HOME"]) do |path|
  if FileTest.directory?(path)
    if File.basename(path)[0] == ?.
      Find.prune # Don't look any further into this directory.
    else
      next
    end
  else
    total_size += FileTest.size(path)
  end
end
From that code you would look for the signatures of the files and embedded folders, to decide if you should drill down further. For each file found that is one you want, use require to load it.
You can find other examples out on the "internets" showing how people use Find. The Dir module also has similar functionality using glob; you only have to tell it where to descend, and then you can iterate over the returned results.
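For instance, a minimal sketch using Dir.glob (the "validations/" directory is hypothetical), loading shallower files first so parent validations are defined before the children that depend on them:

# Collect every .rb file under the validations tree and require each one,
# sorted so that files in shallower directories load first.
Dir.glob('validations/**/*.rb')
   .sort_by { |path| path.count(File::SEPARATOR) }
   .each { |path| require File.expand_path(path) }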
