How to count the total lines of code in a TFS project collection

Team,
Could you please advise: how do I calculate the lines of code for my TFS project collection? I need the count for the entire instance.
Please advise. Thank you.

Note: I'm assuming you're using TFVC, not Git.
You should be able to get this from the data warehouse (Tfs_Warehouse) assuming you have Reporting Services configured.
There is a Code Churn table. I believe you should be able to sum the NetLinesAdded field to get the total number of lines of code.
The Analysis Cube has a Total Lines field, as well.
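If you would rather pull the number with a script than browse the warehouse, something like the following may work. This is only a minimal sketch, assuming Python with pyodbc is available; the fact table name (FactCodeChurn) and the server name are placeholders, since the exact warehouse schema varies by TFS version, so check yours first.

# Sketch only: sum NetLinesAdded over the code-churn fact data in
# Tfs_Warehouse. "FactCodeChurn" and the server name are placeholders;
# verify the actual table name in your warehouse schema.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-tfs-sql-server;"   # placeholder
    "DATABASE=Tfs_Warehouse;"
    "Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Net lines added across all check-ins approximates the current total.
cursor.execute("SELECT SUM(NetLinesAdded) FROM FactCodeChurn")
total = cursor.fetchone()[0]
print("Approximate total lines of code:", total)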
However, you can also get this information from your file system with PowerShell, for example:
(gci -Path 'C:\Users\Daniel\Source\Repos\' -rec -Include '*.cs' | select-string .).Count
This counts every non-blank line in the .cs files under the given path.
This comes with the caveat that "lines of code" is, in almost every single case, a totally meaningless, worthless number.

Related

Best Grasshopper plugin to analyse floor plans

I'm trying to figure out the best way to analyse a Grasshopper/Rhino floor plan. I am trying to create a room map to determine how many doors it takes to reach an exit in a residential building. The inputs are the room curves, names and doors.
I have tried to use Space Syntax or SYNTACTIC, but some of the components are missing. A lot of the plugins I have been looking at are good at creating floor plans but not at analysing them.
Your help would be greatly appreciated :)
You could create some sort of spine that goes through the rooms, passing only through doors, and then do some path finding across the topology, counting how many "hops" you need to reach the exit.
One way to get the topology is to create a data structure (a tuple or KeyValuePair) that holds the curve (room) and a point (the door). Then loop over each pair of rooms and check whether a door point of one room is closer than some threshold to the other; if it is, store the relationship as a graph. (In the abstract sense you don't really need to make lines out of it, but if you plan to use other plugins for path finding, this can be useful.) Finally, run some path-finding algorithm (Dijkstra's, A*, etc.) to find the shortest route.
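A minimal sketch of that idea in Python (the rooms and adjacency below are invented for illustration; since every door crossing costs the same, plain breadth-first search gives the same answer as Dijkstra's would here):

# Rooms are nodes; an edge means the two rooms share a door.
# The layout is a made-up example.
from collections import deque

rooms = {
    "bedroom":  ["hallway"],
    "bathroom": ["hallway"],
    "hallway":  ["bedroom", "bathroom", "living"],
    "living":   ["hallway", "exit"],
    "exit":     ["living"],
}

def hops_to_exit(start, graph, exit_room="exit"):
    # Breadth-first search: count doors crossed from start to the exit.
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        room, hops = queue.popleft()
        if room == exit_room:
            return hops
        for neighbour in graph[room]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, hops + 1))
    return None  # the exit is not reachable from this room

print(hops_to_exit("bedroom", rooms))  # 3 hops: hallway -> living -> exit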
As for SYNTACTIC: if copying the GHA (after unblocking it) from the installation path to the special components folder (or pointing to the folder from _GrasshopperDeveloperSettings) doesn't work, tick the 'Memory load *.GHA assemblies using COFF byte arrays' option in _GrasshopperDeveloperSettings.
*Note that SYNTACTIC won't give you any automatic topology.
If you need some pseudo-code just write a comment and I'd be happy to help.

Is there a way to locate a segment line in an 837?

I'm having some issues isolating errors in my 837. The system that's interpreting my 837 is giving me a segment where the error is found, but since I have so many claims (and therefore segments), I can't just count the segments until I get to the one I need.
Is there some way of finding a specific segment line? I know the general area the segment is in (based on the account number the error is listed under), but I have no way of knowing which of the segments has the error.
Here's an example of what I mean. Revenue codes are listed after the SV2 qualifier, followed by a corresponding procedure code and the cost for that code.
SV2*0450*HC:96368*100.00*UN*1~
DTP*472*D8*20171204~
LX*13~
SV2*0450*HC:96371*700.00*UN*5~
DTP*472*D8*20171204~
LX*14~
SV2*0450*HC:96372*50.00*UN*1~
DTP*472*D8*20171204~
LX*15~
Thanks.
Please take a look at the X12 Parser library.
loop.getLoop("2400", 0).getSegment("SV1").getElementValue("SV101")
can get you the value needed.
For more examples, look at X12ReaderTest.
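If all you need is to jump to the segment number your system reports, a short script can do the counting for you. A minimal sketch in Python, splitting on the "~" segment terminator shown in your sample (the file name and the 1-based counting convention are assumptions; match them to what your interpreter actually reports):

# Split the 837 into segments on "~" and index them.
with open("claims.837") as f:   # placeholder file name
    segments = [s.strip() for s in f.read().split("~") if s.strip()]

# Jump straight to the segment number from the error report:
n = 1234                        # placeholder segment number
print(n, segments[n - 1])

# Or list every SV2 with its position, to narrow down by service line:
for i, seg in enumerate(segments, start=1):
    if seg.startswith("SV2"):
        print(i, seg)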

Talend- Memory issues. Working with big files

Before admins start eating me alive, I would like to say in my defense that I cannot comment on the original posts because I do not have enough reputation, so I have to ask about this again.
I have trouble running a job in Talend (Open Studio for Big Data!). I have a file of 3 GB, which I do not consider too large given that my computer has 32 GB of RAM.
While trying to run my job, first I got an error about heap memory, then it changed to a garbage collector error, and now it doesn't even give me an error (it just does nothing and then stops).
I found these solutions:
a) Talend performance
#Kailash commented that parallel execution is only available if you subscribe to one of the Talend Platform solutions. My comment/question: so is there no other way to parallelize a job with a 3 GB file?
b) Talend 10 GB input and lookup out of memory error
#54l3d mentioned that one option is to split the lookup file into manageable chunks (maybe 500 MB), then perform the join in many stages, one for each chunk. My comment/cry for help/question: how can I do that? I do not know how to split the lookup; can someone explain this to me a little more graphically?
c) How to push a big file data in talend?
I just want to mention that I also went through (c), but I don't have any comment about it.
The job I am performing (thanks to #iMezouar) looks like this:
1) I have an input, tMysqlInput, coming from a MySQL database (3 GB).
2) I used tFirstRows to make the process easier (not working).
3) I used tSplitRow to transform the data from many similar columns into a single column.
4) The result goes to a MySQL output.
Thanks again for reading me and double thanks for answering.
From what I understand, your query returns a lot of data (3 GB), and that is causing an error in your job. I suggest the following:
1. Filter data on the database side: replace tSampleRow with a WHERE clause in your tMysqlInput component's query, in order to retrieve fewer rows into Talend.
2. The MySQL JDBC driver retrieves all data into memory by default, so you need to use the stream option in tMysqlInput's advanced settings in order to stream rows.
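As for the question in (b) about how to split the lookup: outside of Talend, any plain-text lookup file can be split on line boundaries with a short script, and the join then runs once per chunk. A minimal sketch in Python (the path and chunk size are placeholders, and this assumes a flat text/CSV lookup where cutting between lines is safe):

# Write lookup.csv out as ~500 MB pieces named lookup.csv.part1, .part2, ...
CHUNK_BYTES = 500 * 1024 * 1024

def split_file(path, chunk_bytes=CHUNK_BYTES):
    part, size, out = 0, 0, None
    with open(path, encoding="utf-8") as src:
        for line in src:
            if out is None or size >= chunk_bytes:
                if out:
                    out.close()
                part += 1
                size = 0
                out = open(f"{path}.part{part}", "w", encoding="utf-8")
            out.write(line)
            size += len(line)
    if out:
        out.close()
    return part

print(split_file("lookup.csv"), "chunks written")  # placeholder path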

TFS Cube - Total Lines of Code appears incorrect?

I'm using the TFS cube as documented here and am getting a curious result for 'total lines'. If I look at a file inside Visual Studio, I see that it is perhaps 42 lines long (total, including comments and whitespace). However, when I ask the TFS cube for that same information, it tells me that the file is almost - but not exactly - twice that size.
I have my pivot table set up as follows:
Report Filter includes a specific team project, and is filtered on file extension (.cs)
Row labels set to Filename.Parent_ID
Values set to 'Total Lines'
I've looked at the MSDN guidance here and can't see what I've done wrong, other than noting that I have not selected an individual build (if I do so, I get no results).
Edit: As it may be relevant, I'm using TFS 2008 SP1 with SQL 2005 standard. There is a note on the MSDN page which cautions me that SQL 2005 Standard does not support perspectives, and 'the cube elements from all perspectives reside in the team system data cube'. I'm not sure what that means for this problem, if anything.
Check the line breaks in your files: do the numbers change if you convert the files between Windows (CRLF) and Unix (LF) line endings?
Also, add lines of 60, 90, 150, and 200 characters and check how many added lines are reported; there might be some word-wrapping involved.
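To see how line endings could produce an 'almost exactly double' count, here is a small illustration (speculation as to what the cube's counter is doing, but the arithmetic fits): a counter that treats both CR and LF as line terminators sees two lines for every actual line of a Windows (CRLF) file.

# Three lines with Windows (CRLF) endings:
text = "line one\r\nline two\r\nline three\r\n"

print(text.count("\n"))                      # 3: counting LF only
print(text.count("\r") + text.count("\n"))   # 6: naive CR + LF counting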

How to combine two files and create a report with matched fields in COBOL

I have two files.
The first file contains the job name and start time, and looks like this:
ZPUDA13V STARTED - TIME=00.13.30
ZPUDM00V STARTED - TIME=03.26.54
ZPUDM01V STARTED - TIME=03.26.54
ZPUDM02V STARTED - TIME=03.26.54
ZPUDM03V STARTED - TIME=03.26.56
The second file contains the job name and end time, and looks like this:
ZPUDA13V ENDED - TIME=00.13.37
ZPUDM00V ENDED - TIME=03.27.38
ZPUDM01V ENDED - TIME=03.27.34
ZPUDM02V ENDED - TIME=03.27.29
ZPUDM03V ENDED - TIME=03.27.27
Now I am trying to combine these two files to get a report like JOBNAME STARTTIME ENDTIME. I have used ICETOOL to produce the report, but when I get JOBNAME and STARTTIME, ENDTIME is spaces, and when I get ENDTIME, JOBNAME and STARTTIME are spaces.
Please let me know how to code the OUTREC fields, as I have tried almost every possibility to get the desired result, but my output is still not what I need.
I have no idea what ICETOOL is (nor the inclination to even look it up in Google :-) but this is a classic COBOL data processing task.
Based on your simple data input, the algorithm would be:
for every record S in startfile:
    for every record E in endfile:
        if S.jobname = E.jobname:
            output S.jobname, S.time, E.time
            next S
        endif
    endfor
endfor
However, you may need to take into account the fact that:
multiple jobs of the same name may run during the day (multiple entries in the file).
multiple jobs of the same name may run at the same time.
You could get around the first problem by ensuring the E record was the one immediately following the S record (based on time). The second problem is a doozy.
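For illustration, the simple one-run-per-job case translates directly into code. A sketch in Python (the file names are placeholders, and duplicate job names would need the pairing-by-time logic described above):

# Lines look like: ZPUDA13V STARTED - TIME=00.13.30
def load_times(path):
    times = {}
    with open(path) as f:
        for line in f:
            jobname = line.split()[0]
            times[jobname] = line.rstrip().split("TIME=")[1]
    return times

starts = load_times("startfile.txt")   # placeholder names
ends = load_times("endfile.txt")

for jobname, start in starts.items():
    print(jobname, start, ends.get(jobname, "????????"))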
If you're running on z/OS (and you probably are, given the job names), have you considered using information from the SMF records to do this collection and analysis? I'm pretty certain SMF type 30 records hold everything you need.
And assuming this is a mainframe question, here's a shameless plug for a book one of my friends at work has written, check out What On Earth is a Mainframe? by David Stephens (ISBN-13 = 978-1409225355).
I know I'm too late with my solution, but it may be helpful for newcomers to Stack Overflow.
You can make use of the JOINKEYS feature of DFSORT in JCL:
JOINKEYS F1 FIELDS=(01,08,CH,A)
JOINKEYS F2 FIELDS=(01,08,CH,A)
REFORMAT FIELDS=(F1:01,33,F2:23,08)
SORT FIELDS=COPY
OUTREC FIELDS=(01,08,25,08,34,08)
The two JOINKEYS statements match the files on the job name in columns 1-8. REFORMAT builds a joined record from the first 33 bytes of the start-file record plus the end time (the eight bytes after TIME=, which is positions 23-30 in the sample end-file records). OUTREC then keeps just the job name, start time, and end time, so the output record will hold the data as you need it!
