How to identify artifacts which are used multiple times - ibm-doors

I need to see a list of all artifacts in DOORS Next Generation that are used multiple times in the same module. Some users took shortcuts and reused artifacts, such as headings and text artifacts containing common text like "this section was intentionally left blank."
For example, in Module A:
Artifact 12345 says "This section was intentionally left blank."
A user went into Module A and inserted artifact 12345 every time there was no content for a particular section, so artifact 12345 appears 11 times in Module A.
Why do I need to fix this?
This creates two problems:
1. When Section 1.1.1.1 has some content in it, a user might edit artifact 12345, not realizing that the content will be repeated in every other section where 12345 is used.
2. When the file is output to Word or CSV, artifact 12345 has multiple parents, so the output file either omits the extra instances of artifact 12345 or repeats it multiple times under the first parent binding.
In the module view, I have tried using the "Used in Modules" column. It tells me whether an artifact is used in more than one module, which can be helpful, but not how many times it was used or whether it was reused within the same module. Hovering over it shows a pop-up with that information. I'm wondering if there is a way to do a find that jumps me to the reused artifact. For example:
Find
IF Artifact appears >1 in current module THEN locate/stop here.
A report or find function that shows me any artifact that appears in the same module more than once.

My suggestion is to export a module view that includes the ID column to a spreadsheet, then use Excel to highlight duplicate IDs.


How to choose the delimiter for splitting text into columns?

I have a delimiter-separated (DSV) file in which each row corresponds to one record, as usual. Specifically, I use my own delimiter, ###, instead of the default comma (,). However, when I open the DSV file in Google Sheets, it always splits the text on the default delimiter automatically. This makes a mess because some columns may contain several commas.
How can I disable the trigger that splits the text into columns automatically and only use my own delimiter?
My DSV is like below:
a###b###c###d,e,f,g###h,",",j
h###i,j,k###l###m###n,"[o,p]"
q,r###s###c,u###v,w###x,y,'z,'
As suggested, I tried to copy the content directly into a newly created Google Sheet; however, I got the error below because of the size of the content.
There was a problem
Your input contains more than the maximum of 50000 characters in a single cell.
With File > Import > Upload > Drag you can choose your own delimiter (though, as far as I am aware, there is no Excel-style option to "Treat consecutive delimiters as one").
So, depending on your specific file, the result this way may be what you want, provided you are prepared to delete a couple of blank columns for each set of ### (or you could choose a single character rather than ### in the first place).
If you import your CSV data into Google Sheets via copy-paste, you can press this key sequence right after pasting:
LEFT ALT + D
E
ARROW UP
ARROW UP
ENTER
and type in your ###

Determine whether a PCollection is empty or not

How do I check whether a PCollection is empty before writing it out to a text file in Apache Beam (2.1.0)?
What I'm trying to do here is break a file into a number of PCollections, with the number given as a pipeline parameter via a ValueProvider. Since the ValueProvider is not available at pipeline construction time, I declare a fixed count of 26 (the number of letters in the alphabet, and the maximum a user can input) so it is available for .withOutputTags(). This gives me 26 tuple tags from which I retrieve PCollections before writing to text files. Only the tags corresponding to the user's input get populated; the rest are empty. I therefore want to ignore the empty PCollections returned by the remaining tags before applying TextIO.write().
It sounds like you actually want to write a collection into multiple sets of files, where some sets may be empty. The proper way to do this is the DynamicDestinations API; see TextIO.write().to(DynamicDestinations), which will be available in Beam 2.2.0, expected to be cut within the next couple of weeks. Meanwhile, if you'd like to use it now, you can build a snapshot of Beam at HEAD yourself.

Getting inconsistent tab delimiter width when pasting from Google docs spreadsheet

I am trying to create a gadget where all people need to do is copy the contents of a spreadsheet and paste it into a textbox, which will in turn create a nice table for them to embed in their articles.
I managed to do everything; however, when copying and pasting data into a text editor, Google Docs seems to get the width of the tab delimiter wrong between values. Instead of the default 4 spaces, I am getting 2 in some cases. So far I have found that the cause is cells containing strings with spaces. For some reason, this seems to confuse Google Docs into supplying the wrong spacing, which in turn ruins my script.
I know I can use comma-separated values here, but we are trying to give people the ability to simply copy and paste. Look at the example output below:
School Name Location Type No. eligible pupils
In this example, School Name is one cell, Location is another, Type is another and No. eligible pupils is the last one. It is clear that the first cell does not have the necessary space on the right.
Any ideas? I thought about converting every blank run longer than one space into a comma, but users might legitimately type two spaces, which would break things again.
It turned out the code editor was simply not displaying the tabs correctly. Using a regexp and another editor (Vim) showed that all of them were real tabs. :)
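A quick way to confirm what the pasted text actually contains (a stdlib sketch; assign the pasted row to a variable first) is to inspect the delimiter characters directly instead of trusting the editor's rendering:

```python
def looks_tab_delimited(line, expected_fields):
    """Check that a pasted row really uses tab characters between fields,
    regardless of how wide the editor renders each tab."""
    return line.count("\t") == expected_fields - 1

# A pasted spreadsheet row: the gaps may render with different widths,
# but each delimiter is a single real tab character.
row = "School Name\tLocation\tType\tNo. eligible pupils"
```

Splitting on "\t" then recovers the four cells intact, even though "School Name" and "No. eligible pupils" contain spaces of their own.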

=HLOOKUP() works well in one situation with a small table, but doesn't work in a similar one with a bigger table. Why?

Here's an excerpt of my working file, where my problematic situation lies: Google Spreadsheet link
Right now I am balancing heroes and artifacts for my strategy game. I have two separate tables, one for heroes -- the "(All) Heroes" sheet -- and one for wearable artifacts -- the "(All) Artifacts" sheet. Each hero can wear 6 different artifacts simultaneously, and there are currently 26 artifacts in the game.
I want to check how heroes' stats would change if they wore different combinations of artifacts. To do that I need a separate sheet -- "(All) Heroes with Artifacts Calculator" -- where I can choose any hero of any level from my "Heroes" sheet and any combination of artifacts from my "Artifacts" sheet. I also need a column that calculates the final stats of the chosen hero with all the chosen artifacts.
To do that, I've decided to use data validation for hero/artifact picking (for example, cell '(All) Heroes with Artifacts Calculator'!B2) and the =HLOOKUP() function to populate the table according to the picked hero/artifact (for example, cell '(All) Heroes with Artifacts Calculator'!B3). The problem is that my =HLOOKUP() formula returns #N/A if sorting is set to false and an incorrect value if sorting is set to true.
Before doing all this, I've created some small test tables for heroes, artifacts and resulting calculations: "Test Heroes", "Test Items" and "Test Calculator". The very same formula that gave #N/A in the first case works great in this one.
I really need to finish that big "(All) Heroes with Artifacts Calculator" table, but I don't know what to do. I've tried using =QUERY() function, but I wasn't able to achieve the desired result.
Please help me out.
Replace your formula =HLOOKUP($B$2,'(All) Heroes'!C:BT,2,false) with =HLOOKUP($B$2,'(All) Heroes'!C$4:BT,2,false)
Basically, the first row of the range should contain the key the lookup is performed on.
Edit
The above formula will work for all attributes below row 4 in the "(All) Heroes" sheet. For Level and Name (en_EN), you can use INDEX and MATCH.
For example, the formula in cell (All) Heroes with Artifacts Calculator!B3 would be
=index('(All) Heroes'!C2:BT2, match(B$2,'(All) Heroes'!C$4:BT$4,0))
and that in cell (All) Heroes with Artifacts Calculator!B4 would be
=index('(All) Heroes'!C3:BT3, match(B$2,'(All) Heroes'!C$4:BT$4,0))

Automatically updating Data Validation lists based on user input

I have a very large data set (about 16k rows). I have 10 higher-level blocks, and within each block 4 categories (10 rows each) that use data validation lists to show the items available in each category. The lists should update automatically based on user input. I want to use the same data set for each block, preferably with the least calculation- and size-intensive approach. I have put together a sample file that outlines the issue with examples.
Sample File
Thank you for your help in advance.
Okay, I've found something, but it can be quite time-consuming to do.
Select each range of cells. For instance, for the first one, select B3:B18 and right-click on the selection. Find "Name a range..." and give it the name "_FIN_CNY". Repeat for all the other ranges, changing the name where necessary.
Select the first range of cells that should get the data validation, click "Data validation", pick the option "Allow: List" (you already have it), and then in the source, put the formula:
=INDIRECT($G$4&"_CNY")
$G$4 is the cell where the user makes their input; it changes as you change blocks.
_CNY is the category. Change it to _CNY2 for the second category.
Click "OK" and this should be it. Repeat for the other categories.
I have put an updated file on Dropbox where you can see I already did this for the _FIN data for categories CNY, CNY2 and INT, and for _GER as well. You'll notice the INT category for _GER doesn't work; that's because the named range _GER_INT doesn't exist yet.
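The INDIRECT trick above amounts to building a range name from user input and resolving it at run time. A hedged Python analogue of the same lookup pattern (the range names and item values are illustrative, following the sample file's BLOCK_CATEGORY naming):

```python
# Named ranges, keyed the same way as in the sheet: BLOCK_CATEGORY.
named_ranges = {
    "_FIN_CNY": ["item 1", "item 2"],
    "_FIN_CNY2": ["item 3"],
    "_GER_CNY": ["item 4"],
}

def validation_list(block, category):
    """Mimic =INDIRECT($G$4 & "_CNY"): concatenate the user's block choice
    with the category suffix and resolve the resulting range name.

    Returns an empty list when the named range does not exist yet, which
    is why the _GER_INT list appears broken in the sample file.
    """
    return named_ranges.get(block + "_" + category, [])
```

This is also why the approach only scales by adding more named ranges: each block/category pair needs its own entry before the concatenated name resolves to anything.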