I need to create a batch file, i.e., batch.bat.
When we execute this file:
It will open SQLPlus
It will call a file called file1.sql.
file1.sql contains the creation and insertion scripts for a particular user.
I have file1.sql, but what I need to know is: how do I create a batch file to perform this function?
A little test script (stored in a file named test.sql):
select 1 from dual;
exit;
... and to execute it:
sqlplus user/password @test.sql
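From there, batch.bat just needs to run that same command. A minimal sketch, assuming file1.sql sits next to the batch file and ends with an exit statement so SQL*Plus returns control to the shell (user, password, and the mydb connect identifier are placeholders):

@echo off
REM Run SQL*Plus non-interactively and execute file1.sql
sqlplus user/password@mydb @file1.sql
REM Optional: keep the window open to review the output
pause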
Related
I need to execute a script (Informix code) in a .sql file for migration purposes. The thing is, I want to load it from a function so that I can use the exception handler and roll back in case of an error.
So, this is the code (still experimenting):
DROP FUNCTION IF EXISTS "informix".SCRIPT_MIGRATION();
CREATE FUNCTION "informix".SCRIPT_MIGRATION()
RETURNS BOOLEAN as RESULT;

    DEFINE lv_execute lvarchar(32739);
    DEFINE li_errnum, li_eisam INT;
    DEFINE lv_errtxt CHAR(200);

    ON EXCEPTION SET li_errnum, li_eisam, lv_errtxt
        ROLLBACK;
        CALL regista_log('script_migration', get_session_user(), li_errnum, lv_errtxt);
        RETURN 'f';
    END EXCEPTION;

    CALL set_isolation_level();

    BEGIN;

    LET lv_execute = 'LOAD FROM ''C:\Users\Admin\Desktop\ConstaWeb_Stuff\test.sql'' DELIMITER ''+'' INSERT INTO SCRIPT_MIGRATION_TEMP_TABLE;';

    DROP TABLE IF EXISTS SCRIPT_MIGRATION_TEMP_TABLE;
    CREATE TABLE SCRIPT_MIGRATION_TEMP_TABLE(
        STRING_TO_EXECUTE LVARCHAR(31739)
    );

    EXECUTE IMMEDIATE lv_execute;

    COMMIT;

    RETURN 't';

END FUNCTION;
CALL SCRIPT_MIGRATION();
That's because we apparently can't execute the LOAD command directly inside functions, so I'm trying to execute it dynamically. But I'm not getting it right, apparently...
The objective here is to execute the script in a file (not a shell command script! it's an Informix script, with creates, loads, unloads, drops...). I'm open to other ways of doing this.
I'm relatively new to Informix so I'm sure there is still a lot I don't know about it.
As already noted, the LOAD command is not a command recognized by the Informix server. Client products emulate an SQL statement by recognizing the syntax and reading the file and executing appropriate SQL statements. Changing the way you (try to) execute it in a function executing in the server will not help.
Using a shell script instead may help.
If you're migrating an existing Informix database to a new location (machine, version of Informix), then using DB-Export and DB-Import may be a good way to go.
The DB-Access command is the 'standard' way to execute scripts from a shell script. You'd need to ensure you set the DBACCNOIGN environment variable to 1. That will then stop the script if there's an error during the LOAD and roll back the transaction. There's also the DB-Load command, but it will be harder to roll back DDL statements since it does not handle those.
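A minimal shell sketch of that approach (mydb and migration.sql are placeholder names):

#!/bin/sh
# Make DB-Access stop on errors and roll back, instead of ignoring them
DBACCNOIGN=1
export DBACCNOIGN
dbaccess mydb migration.sql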
Alternatively, you might find my SQLCMD* program useful — though it too isn't perfect. However, unlike DB-Access, it allows you to control which statements can generate errors that are ignored and which are not (continue [on|off|push|pop]; before and after as appropriate).
With careful packaging, you can use it to create your migration, assuming the DB-Export and DB-Import won't do the job for you automatically.
* You may have to subscribe to the IIUG to get at this. The registration is not onerous, and neither is the email load.
Since Ranorex does not provide re-run functionality out of the box, I have to write my own. Before I start, I just want to ask for advice from people who've done it, or about possible existing solutions on the market.
Goal is:
At the end of the run, re-run the failed test cases.
Requirements:
The number of re-run iterations should be customizable
If data binding is used, only the data-binding iterations that failed should be re-run
I would use the Ranorex command line argument possibilities to achieve this. The main thing would be to structure the suite so that each test case can be run separately.
During the test, I would log the failed test cases into a text file, a database, or any other store that you can later read the data from (you can even parse them from the XML result if you want to).
And from that data, you just pass the test case name as a command line argument when running the suite again:
testSuite.exe /testcase:TestCaseName
or
testSuite.exe /tc:TestCaseName
The full command line args reference can be found here:
https://www.ranorex.com/help/latest/lesson-4-ranorex-test-suite
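A minimal batch sketch of that re-run step, assuming the failed test case names were logged one per line to a file named failed.txt (a placeholder name):

@echo off
REM Re-run each failed test case by name, one suite invocation per case
for /f %%t in (failed.txt) do (
    testSuite.exe /tc:%%t
)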
Possible solutions:
1a. Based on the report xml: Parse the report and collect info about all failed test cases.
Cons:
Parsing will be tricky
or:
1b. Or create a list of failed test cases at runtime: if a failure occurs on tear-down, add that iteration to the re-run list (could be a file or a DB table).
Using for example:
// Name of the currently executing test case
string testCaseName = TestCaseNode.Current.Name;
// Index of the current data-binding row for that test case
int testCaseIndex = TestSuite.Current.GetTestCase(testCaseName).DataContext.CurrentRowIndex;
then:
2a. Based on the list, run the executable with parameters, looping through each record.
like this:
testSuite.exe /tc:testCaseName /tcdr:testCaseIndex
or:
2b. Or generate a new test suite file (.rxtst) and recompile the solution to create an updated executable.
and last part:
3a. At the end, repeat the whole process from a CI script that runs the executable until failedTestCases == 0 || currentRerunIterations >= expectedRerunIterations
or:
3b. Or wrap the whole test suite into a re-run test module, perform the same failedTestCases == 0 || currentRerunIterations >= expectedRerunIterations check, and run Ranorex from that test module
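For 3a, a hedged C# sketch of that outer re-run loop, assuming the failed test cases are read back from the re-run list described in 1b (GetFailedTestCases, failed.txt, and maxRerunIterations are hypothetical placeholders):

using System.Collections.Generic;
using System.Diagnostics;

class RerunDriver
{
    static void Main()
    {
        int maxRerunIterations = 3;        // assumed customizable limit
        int iteration = 0;
        List<string> failed = GetFailedTestCases();

        // Keep re-running until everything passes or the limit is hit
        while (failed.Count > 0 && iteration < maxRerunIterations)
        {
            foreach (string tc in failed)
            {
                // Re-run a single test case by name
                Process.Start("testSuite.exe", "/tc:" + tc).WaitForExit();
            }
            failed = GetFailedTestCases(); // re-read the list after the re-run
            iteration++;
        }
    }

    // Hypothetical helper: read failed test case names from the re-run list
    static List<string> GetFailedTestCases()
    {
        return new List<string>(System.IO.File.ReadAllLines("failed.txt"));
    }
}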
Please let me know what you think about it.
I have a script component which splits my data down two file streams and into two file destinations. Unfortunately, even if there are no records for one of the streams, the destination file is still created. How can this be prevented?
Well, this is how SSIS behaves, i.e., it creates destination files even if there is no data. You would need a workaround to get around this:
Insert a 'Row Count' transformation between the script component and your destination file, and assign a variable to it, say @intNumberOfRows.
Connect the Data Flow Task to a File System Task that deletes the empty file, using a conditional expression on the precedence constraint based on @intNumberOfRows being 0.
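The precedence constraint expression would look something like this (variable name taken from the step above):

@[User::intNumberOfRows] == 0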
I need some help with the following issue:
Inside a foreach loop, I have a Data Flow Task that reads each file from the collection folder. If it fails while processing a certain file, that file is copied to an error folder (using a File System Task called "Copy Work to Error").
I would like to set up a Send Email Task that warns me if there were any files sent to the error folder during the package execution. I could easily add this task after the "Copy Work to Error" task, but if there are many files that fail the Data Flow Task, my inbox would get filled.
Instead, I would like the Send Mail Task to run only once (after the foreach loop completes), and only if the "Copy Work to Error" task was executed at least once. Is there any way I could achieve that?
Thanks,
Ovidiu
Here's one way that I can think of:
Create an integer variable, @Total, outside the ForEach container and set it to 0.
Create an integer variable, @PerIteration, inside the ForEach container.
Add a Script Task as an event handler to the File System Task. This task should increment @Total by @PerIteration (see the sketch after this list).
Add your Send Mail task after the ForEach container. In the precedence constraint, set the type to Expression and specify the condition @Total > 0. This should ensure that your task is triggered only if the File System Task was executed in the loop at least once.
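A short C# sketch of what that Script Task's Main method might contain, assuming both variables are exposed to the task as ReadWriteVariables (names follow the steps above):

// Add this iteration's count to the running total
Dts.Variables["User::Total"].Value =
    (int)Dts.Variables["User::Total"].Value +
    (int)Dts.Variables["User::PerIteration"].Value;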
You could achieve this using just a boolean variable, say IsError, created outside the scope of the ForEach Loop with a default value of False. You can set it to True immediately after the success of the Copy Work to Error task using an Expression Task (SSIS 2012) or an Execute SQL Task. Finally, your Send Mail Task would be connected to the ForEach Loop with the precedence constraint set to the expression @IsError.
When the error happens, create a record in a table with the information you would like to include in the email - e.g.
1. File that failed, with full path
2. The specific error
3. Date/time
Then at the end of the package, send a consolidated email. This way, you have a central location to turn to in case you want to revisit the issue, or if the email is lost/not delivered.
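A hedged SQL sketch of such a log table (the table and column names are placeholders):

CREATE TABLE etl_file_errors (
    file_path  VARCHAR(500),   -- file that failed, with full path
    error_text VARCHAR(4000),  -- the specific error
    logged_at  DATETIME        -- date/time of the failure
);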
If you need implementation help, please ask.
Is there any way in a Netezza stored procedure to put output into a file? By output, I mean the statements from RAISE NOTICE in Netezza. Also, we can't use any shell or other script, because the SP is invoked via a tool that cannot execute shell scripts and can only make DB calls.
There are multiple ways to write these values to a file:
1) Write a UDX which can write your statements to disk
2) Every RAISE NOTICE statement gets logged in the /nz/kit/log/pg.log file, so you could write a bash script, run via cron or a similar tool, that extracts the required information for you.
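A minimal bash sketch of option 2), assuming the default pg.log location above and a hypothetical output file /tmp/sp_notices.log:

#!/bin/sh
# Pull the NOTICE lines emitted by stored procedures out of the Netezza postgres log
grep 'NOTICE' /nz/kit/log/pg.log > /tmp/sp_notices.log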