SSIS - Vendor supplies 12 XML tables, we only need 3 - how to remove or ignore the extra tables - ssis-2012

I have found this to be a bug; however, I need to remove the warnings:
[SSIS.Pipeline] Warning: The output column "x" on output "y" and component "z" is not subsequently used in the Data Flow task.
How do I accomplish this?
Thanks

The vendor supplied .xsd contains definitions for 12 xml tables.
We only use 3 but SSIS is complaining with warning message:
[SSIS.Pipeline] Warning: The output column....
Most of what I've found in web searches says to direct these unused output streams to a Union All task, but I haven't seen a good example of this, so I'm looking for other methods.
Thanks

You may need to open the XML Source component and, on the Columns tab, uncheck the columns that you are not using later in the data flow; this will clear the warning messages.

Related

Reading COBOL code with .NET to generate a call graph

I am working on a project to automatically generate a class diagram from COBOL. I am developing a .NET console application. I need help tracking down the procedure name where the PERFORM statement is used in the example below.
Z-POST-COPYRIGHT.
    move 0 to RETURN-CODE
    perform Z-WRITE-FILE
How do I track the procedure name 'Z-POST-COPYRIGHT' under which the procedure 'Z-WRITE-FILE' is called? The only idea I could think of in terms of COBOL is through indentation, as the procedure names are always indented. Ideally, in the database, the code should record the procedure name after the word 'perform' and the procedure under which it is called (in this case Z-POST-COPYRIGHT).
I assume you want to do this "on your own" without external tools (a faster approach can be found at the end).
You first have to "know" your source:
which compiler it was compiled with (get a manual for this compiler)
which options were used
Then you have to preparse the source:
include copybooks (doing the given REPLACING rules if any)
if the source is in fixed-form reference format: concatenate the contents of the last line and the current line if you find a - in column 7
check for REPLACE and change the result accordingly
remove all comments: in fixed-form reference format these are lines with * or / in column 7 (extensions like "variable" format or "terminal" format exist), possibly plus inline comments *> or compiler-specific extensions like |; in free-form reference format there are only inline comments. Depending on the further re-engineering you want to do, it could be a good idea to extract the comments and store them, at least with a line-number reference.
Then you can finally track the procedure name with the following rule (illustrated in the fragment below):
go backwards to the last separator period (there are more rules, but treating a period followed by at least one line break, a space, a comma or a semicolon as a separator period [I've never seen the last two in real code, but they are possible] should be enough)
check if there is only one word between this separator period and the previous one
if this word is not a reserved COBOL word (this depends on your compiler), it is very likely a procedure name
Start from here and check the output, then fine-tune the rule against actual false positives or missing entries.
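As an illustration of the rule (not the poster's code - the statement bodies are invented, only the two names come from the question), this is what the rule would pick up; the *> lines are comments:

Z-POST-COPYRIGHT.
*> Z-POST-COPYRIGHT is the single word between two separator periods
*> and is not a reserved word, so it is very likely a procedure name.
    MOVE 0 TO RETURN-CODE
    PERFORM Z-WRITE-FILE
    .
*> Scanning backwards from the PERFORM, the first separator period
*> found is the one ending "Z-POST-COPYRIGHT.", so the PERFORM of
*> Z-WRITE-FILE is recorded under Z-POST-COPYRIGHT.
Z-WRITE-FILE.
    DISPLAY "WRITING RECORD"
    .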
If you want to do more than only extract the procedure names for PERFORM and GO TO (you should at least check the sources for PERFORM ... THRU), then this can turn into a lot of work...
Faster approach with external tools:
run a COBOL compiler on the complete sources and tell it to do the preparsing only - this way you have the big preparsing step solved already
if you have the option: tell the compiler or an external tool to create a symbol table / cross reference - this will tell you in which line a procedure is and its name (you can simply find the correct procedure by comparing line numbers)
Just a note: you may want to check GnuCOBOL (formerly OpenCOBOL) for the preparsing and/or generation of symbol tables/cross-references, printcbl as a completely external tool for preparsing, and cobxref for complete cross-reference generation.

Error Adding Variables in SPSS

I am using Data > Merge Files > Add Variables in SPSS. The two .sav files both contain a variable called "Student_No", which is numeric with the same width in each file. I am using this as the key variable on which to match cases. I am not indicating that cases are not sorted. It makes no difference whether I indicate that the active or non-active data set is keyed. In either case, the new variables are not properly matched with the cases.
What are some of the potential problems that might be causing this mismatch?
The dialog box pastes STAR JOIN syntax in some cases and MATCH FILES in others. There were some problems with STAR JOIN in older versions of Statistics, so you might need to use MATCH FILES instead. See the Command Syntax Reference for that command on how to do this.

How to locate the field that produces the “data type mismatch” exception?

I have a really long INSERT query with more than 40 fields (from an 'inherited' FoxPro database) processed using OleDb, and it produces the exception 'Data type mismatch.' Is there any way to know which field of the query is producing this exception?
For now I'm using the brute-force method of reducing the number of fields in the INSERT until I locate the buggy one, but I imagine there must be a more direct way to find it...
There isn't really any shortcut beyond taking a guess at which 20 might be the problem, chopping out the other 20 and testing, and repeating that reductive process until you hit it.
Or alternatively, look at the table structure(s) in the DBF and make sure the field types match the OleDb types you're using. The details of how .NET types are mapped to Visual FoxPro table field types are here.
If you have access to the Visual FoxPro IDE you could probably do that a lot quicker by knocking up a little program or even just doing it in the Command Window.
You haven't told us which language you are using, otherwise we could give a sample to handle it.
Basically what you would do is:
Get the structure,
Parse the insert statement and get values,
Compare data types.
It should only take a short piece of code to make this check.

File status 23 on READ after START

My question is pertaining to a file status 23, which according to MicroFocus means that upon my attempt to READ from a .DAT file:
"Indicates no record found."
or
"Indicates a duplicate key condition. Attempt has been made to store a
record that would create a duplicate key in the indexed or relative
file or a duplicate alternate record key that does not allow
duplicates."
I have ruled out the latter as my issue because I'm allowing duplicates in this case.
The reason I'm stumped is that I'm using a START to navigate to the record inside of my .DAT file, and when I execute a READ just after the START has positioned my file pointer, I get the file status 23.
Here is my code:
900-GET-INST-ID.
OPEN INPUT INST-MST.
MOVE FALL-IN-INST TO INST-NAME-REC.
START INST-MST
KEY EQUAL TO INST-NAME-REC
INVALID KEY
DISPLAY "RECORD NOT FOUND"
NOT INVALID KEY
READ INST-MST
MOVE INST-ID-REC TO WS-INST-ID
END-START.
CLOSE INST-MST.
So when I run this code my START successfully executes and goes into the NOT INVALID KEY block, and then the very next line executes and my READ comes back with nothing. How can this be if my alternate key (INST-NAME-REC) is actually found inside the .DAT?
I have ensured that my FD picture clauses match exactly in the ISAM Build program and in this program (the reading program).
The second reason you show is excluded not because you allow duplicate keys, but because that error message with that file-status is for a WRITE, and your failure is on a READ.
Here's your problem:
READ INST-MST
Here's how you fix it:
READ INST-MST NEXT
In COBOL 85, the READ statement has two formats. Format 1 is for a sequential read and Format 2 is for a keyed (random) read.
Unfortunately, the minimum READ syntax for both sequential and keyed reads is:
READ file-name
Which means if you use READ file-name the compiler will implicitly treat it as Format 1 or Format 2 depending on your SELECT statement.
READ file-name NEXT RECORD is identical to READ file-name NEXT.
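To make the two formats concrete, here is a minimal sketch against the question's file and alternate key (the error handling shown is only illustrative, not the poster's code):

*> Format 1 - sequential retrieval: get the next record in key order
READ INST-MST NEXT
    AT END DISPLAY "END OF FILE"
END-READ

*> Format 2 - random retrieval: get the record whose key matches
READ INST-MST
    KEY IS INST-NAME-REC
    INVALID KEY DISPLAY "NO RECORD WITH THAT KEY"
END-READ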
Consult your actual documentation for a full explanation and for any Language Extensions from the vendor. If you look closely, the behaviour of READ file-name with no further option depends on the type of file. With a keyed file, the default is a keyed READ. Your key field (luckily) does not contain a key that exists, so you get the 23.
Even if it didn't work like that, what would be the point of not using the word NEXT? The compiler always knows what you told it (which is sometimes not what you think you told it), but in a situation like this the human reader can be very unsure. The last thing you want to do when bug-hunting is break off to look at the manual to discover exactly how that READ behaves, and then try to work out whether that behaviour was the one the original coder intended. The bug? A bug? Intended, but sloppy, code? No-one wants to spend that time, and look, even now, it is you.
A couple of comments on your code.
Look up the FILE STATUS clause of the SELECT. Use it. One field per file. Check after each IO. It'll save you grief.
Once you are using the FILE STATUS, ditch the imperative parts of the IO statements (the something/NOT something) and replace them with tests of the file-status field (using 88s); a sketch follows these notes.
It looks like you are OPENing and CLOSEing your look-up file all the time. Please don't. OPEN and CLOSE can be very heavy and time-consuming, so do them once per program per file. If you've done that because of a problem, find a correct resolution to that problem, don't use a hack.
Drop the full-stops/periods except where they are needed. This is COBOL 85, which means that for 30 years the number of full-stops/periods required in the PROCEDURE DIVISION has been greatly reduced. Get modern and take advantage of that; it'll save you Gotcha!s as you copy/paste code, leave in the one full-stop which shouldn't be there, and change the way the program behaves.
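Pulling those comments together, a minimal sketch of how the look-up might read (the status field WS-INST-STAT, its 88-levels and the literal file name are invented here, and INST-ID-REC is assumed to be the prime key; keep your real SELECT/FD otherwise, and OPEN the file once elsewhere in the program):

*> in the FILE-CONTROL paragraph
SELECT INST-MST ASSIGN TO "INSTMST.DAT"
    ORGANIZATION IS INDEXED
    ACCESS MODE IS DYNAMIC
    RECORD KEY IS INST-ID-REC
    ALTERNATE RECORD KEY IS INST-NAME-REC WITH DUPLICATES
    FILE STATUS IS WS-INST-STAT.

*> in WORKING-STORAGE; "02" means another record has the same
*> alternate key, which is still a successful read when
*> duplicates are allowed
01  WS-INST-STAT                PIC XX.
    88  INST-IO-OK              VALUES "00", "02".
    88  INST-KEY-NOT-FOUND      VALUE "23".

*> in the PROCEDURE DIVISION; INST-MST is assumed already OPEN
900-GET-INST-ID.
    MOVE FALL-IN-INST TO INST-NAME-REC
    START INST-MST KEY EQUAL TO INST-NAME-REC
    IF INST-IO-OK
        READ INST-MST NEXT
    END-IF
    IF INST-IO-OK
        MOVE INST-ID-REC TO WS-INST-ID
    ELSE
        DISPLAY "RECORD NOT FOUND, STATUS " WS-INST-STAT
    END-IF
    .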

Adding a field to an existing COBOL data file

I have an existing MF COBOL 4.0 program with years of data in an ISAM file, but I need to add a new field to the existing file. The record currently has 1208 chars and I need to add another 10 to it.
If I simply put the extra PIC X(10) field in my copybook, it gives me an error.
You need to modify the underlying data file to match your file definition in COBOL. One way to do so would be to define an output record exactly like what your data looks like now, but with an extra PIC X(10) on the end of it. You would then read in your data record by record and write it out to a new file with 10 extra spaces on the end. That way your data is 10 characters longer, and you can go back and add that extra PIC X(10) to your main program. It should work after that; a sketch of such a conversion follows.
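A minimal sketch of that one-off conversion, under assumptions I have had to invent (the file names, the 10-character prime key and the split of the remaining 1198 characters are all hypothetical; use your real layout and SELECT clauses):

IDENTIFICATION DIVISION.
PROGRAM-ID. EXPAND-REC.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT OLD-FILE ASSIGN TO "MASTER.DAT"
        ORGANIZATION IS INDEXED
        ACCESS MODE IS DYNAMIC
        RECORD KEY IS OLD-KEY
        FILE STATUS IS WS-OLD-STAT.
    SELECT NEW-FILE ASSIGN TO "MASTERNW.DAT"
        ORGANIZATION IS INDEXED
        ACCESS MODE IS DYNAMIC
        RECORD KEY IS NEW-KEY
        FILE STATUS IS WS-NEW-STAT.
DATA DIVISION.
FILE SECTION.
FD  OLD-FILE.
01  OLD-REC.
    05  OLD-KEY             PIC X(10).
    05  OLD-REST            PIC X(1198).
FD  NEW-FILE.
01  NEW-REC.
    05  NEW-KEY             PIC X(10).
    05  NEW-REST            PIC X(1198).
    05  NEW-EXTRA           PIC X(10).
WORKING-STORAGE SECTION.
01  WS-OLD-STAT             PIC XX.
01  WS-NEW-STAT             PIC XX.
PROCEDURE DIVISION.
    OPEN INPUT  OLD-FILE
    OPEN OUTPUT NEW-FILE
*> read the old file in key order and write each record back out
*> with the new 10-byte field initialised to spaces
    PERFORM UNTIL WS-OLD-STAT NOT = "00"
        READ OLD-FILE NEXT
            AT END CONTINUE
            NOT AT END
                MOVE OLD-KEY  TO NEW-KEY
                MOVE OLD-REST TO NEW-REST
                MOVE SPACES   TO NEW-EXTRA
                WRITE NEW-REC
        END-READ
    END-PERFORM
    CLOSE OLD-FILE NEW-FILE
    STOP RUN.

Afterwards you would rename the new file over the old one and rebuild any alternate indexes your real file has.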
With changing the copybook, you're only changing the representation of the data used in your program. Shouldn't you be restructuring the data source (i.e. the ISAM file) as well?
Late answer, but I thought you might be interested.
I've been working on our Cobol system for over 20 years and we've come across this issue many times.
Changes to the structure of our index files are what we consider a "Major Release". These require specific Conversion programs which:
Rename the physical file, moving it aside to an 'old' file
Open the 'old' version of the file (using a version of the copybook before the change)
Open (create) the 'new' version of the file
Move the contents of each 'old' record to a 'new' record and WRITE it
Of course these conversions require the system to be 'down', hence the reason why they are considered major releases.
If you have files which are likely to have fields added to them in the future, you can add extra FILLER to the end of the record in the indexed file to let you cope with new fields being added. We tend to add a FILLER of 50 or 100 bytes. Of course this doesn't help you if you change one of the existing fields, or the structure of any of the keys.
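For example, a copybook that reserves room for future growth might end like this (the record and field names are made up for illustration):

01  CUSTOMER-REC.
    05  CUST-KEY            PIC X(10).
    05  CUST-NAME           PIC X(40).
    05  CUST-BALANCE        PIC S9(7)V99 COMP-3.
*> spare space so future fields can be carved out of the FILLER
*> without changing the physical record length
    05  FILLER              PIC X(50).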
For file errors, you will want to keep a list handy. I recommend starting with a list you find online, and any time you get an error you cannot figure out in five seconds, add a detailed explanation of the resolution so you will have it in your notes the next time it happens. Here are a couple of decent lists to start with:
http://www.simotime.com/vsmfsk01.htm
http://www.briarcliff.edu/departments/cis/Cobol/Error%20Codes.html
In my list, file status 39 is:
OPEN-CONFLICT-FILE-ATR - The OPEN statement was unsuccessful because a conflict was detected between the fixed file attributes and the attributes specified for that file in the program. These attributes include the organization of the file (sequential, relative, or indexed), the prime record key, the alternate record keys, the code set, the maximum record size, and the record type (fixed or variable).
And this is from my personalized note: check the file that you have assigned to your ddname in your JCL, especially the length allocation. In your case, you know that the length does not match, since you just changed the program.
There are utilities to reformat datasets, particularly SYNCSORT. Or of course you can write your own.
