Test database connection before step initialization in Pentaho Kettle?

I am currently working on a transformation in Pentaho Kettle. I have numerous steps that all depend on the same database connection. The username and password are provided by the user as parameters. If the wrong credentials are provided, every single step that depends on the database connection fails to initialize and logs a separate error about it. This results in a great wall of scary red text, which I am afraid will be quite hard for the intended end user of the transformation to interpret.
So, is there any way to test the database credentials before the other steps initialize, and then log a single informative error message if they are incorrect?

You can try the Check Db connections step in a PDI job. There you can easily list all the connections that you are using in the job, and based on the result of the check you can define your logic flow.
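If you prefer to guard the transformation yourself (for example from a small wrapper that validates the parameters first), the idea boils down to attempting one connection and logging a single error. Here is a minimal sketch in Java/JDBC; the URL, class name and argument handling below are assumptions for illustration, not part of PDI:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionCheck {

    // Returns true if the credentials work; otherwise logs a single message.
    // The JDBC URL is a placeholder -- use whatever your transformation connects to.
    static boolean credentialsValid(String url, String user, String password) {
        try (Connection ignored = DriverManager.getConnection(url, user, password)) {
            return true;
        } catch (SQLException e) {
            System.err.println("Could not connect to the database: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        String url = "jdbc:mysql://dbhost:3306/mydb";   // hypothetical connection string
        if (!credentialsValid(url, args[0], args[1])) { // username and password passed in
            System.exit(1);                             // stop before the transformation runs
        }
        // ...otherwise start the transformation as usual
    }
}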
Hope it helps :)

Related

ADSSYS forgotten password -- how to reset?

I've been assigned to maintain an Advantage-based Clipper program. I tried to use the Advantage Data Architect program to connect to the database, but failed. I'm thinking the password I got is wrong... maybe the previous maintainer or somebody else changed it... (it worked on the test server but not on the live server).
So I'm just wondering if it's possible to reset the password without reinstalling and losing all the data?
Thanks.
The Advantage website states:
ADSSYS
"All Advantage data dictionaries contain an administrative user called ADSSYS. This user has permissions to perform any operation or update on the dictionary. Be aware that if the ADSSYS password is lost, it cannot be recovered or reset."
NOTE: When the ADSSYS account is created, the password is blank by default, so you may first try connecting with a blank password.
I suggest contacting Advantage and asking them for a resolution, as this situation has likely occurred before (and will occur again).

Setup TFS/Test Manager to send email on test failure

Is it possible to setup TFS/Test Manager so that it sends out an email after a test fails?
Yes, it is possible, but it requires quite a lot of changes/additions to the process template and possibly a custom-made activity.
After the tests have run, we check whether BuildDetail.BuildPhaseStatus has the status Failed.
We send mail to everyone who has changesets committed to this build, so the build goes through BuildDetail.AssociatedChangesets (you need to have AssociateChangesetsAndWorkItems turned on) and gets the committer usernames.
Unfortunately for us, there's no good correlation between TFS usernames and email addresses at our place, so we had to create a custom activity that looks the address up in AD.
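The build activity itself is .NET, but the directory lookup is plain LDAP. As a hedged illustration of the idea only, a Java/JNDI query for a user's mail attribute could look like this (the LDAP URL, bind account and search base are made-up examples):

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.*;
import java.util.Hashtable;

public class AdEmailLookup {

    // Looks up the mail attribute for a given account name in Active Directory.
    // The LDAP URL, bind account and search base are hypothetical examples.
    static String emailFor(String accountName) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");
        env.put(Context.SECURITY_PRINCIPAL, "svc-build@example.com");
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] { "mail" });

            NamingEnumeration<SearchResult> results = ctx.search(
                "DC=example,DC=com",
                "(sAMAccountName=" + accountName + ")",
                controls);

            if (results.hasMore()) {
                Attribute mail = results.next().getAttributes().get("mail");
                return mail != null ? (String) mail.get() : null;
            }
            return null;
        } finally {
            ctx.close();
        }
    }
}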
The actual email is sent with the BuildReport action from Community TFS Build Extensions. We modified the XSLT, but that's not really necessary. We also wanted to include a listing of the failed tests, and that required modifying the action itself (test data isn't included by default).
Looking at this description and all the work it took to get this running, I'm beginning to wonder if it was worth it ;).

Dealing with service dependencies that time out or fail

I have written a Windows service that overrides the Logon and Logoff methods of ISensLogon2 to detect when logon and logoff occur, and then inserts the log information into SQL Server on the server computer.
But there is a problem when I turn on the client computer just after the server.
In this situation my service cannot insert into SQL Server.
I think this is because SQL Server has not finished loading before the Windows service tries to access it.
So I want to find a way to programmatically check whether SQL Server is ready before trying to work with it.
Your service can't start until its dependencies, remote or otherwise, have also started. Checking SQL Server is easy: try to connect to it and retry until you succeed.
The only problem is that services have startup timeouts, so you can't sit and repeat this indefinitely.
Things that cannot be reliably started in a reasonable timeframe should not be services, or they should fail as soon as possible. Otherwise you will end up with a lot of support requests for your service timing out.
Services are also usually not interactive, so a failure is worse because you can't directly tell the user that you're not up unless you add something like a tray icon.
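The service in the question is .NET, but the retry-with-a-deadline idea is language-agnostic. Here is a rough sketch in Java/JDBC of "retry until a deadline passes, then give up"; the connection string, credentials and timings are placeholders, not taken from the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class WaitForSqlServer {

    // Retries the connection until it succeeds or the deadline passes.
    // URL, credentials and timings are placeholders for illustration only.
    static Connection connectWithRetry(String url, String user, String password,
                                       long timeoutMillis) throws SQLException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            try {
                return DriverManager.getConnection(url, user, password);
            } catch (SQLException e) {
                if (System.currentTimeMillis() >= deadline) {
                    throw e;            // give up, let the service fail fast
                }
                Thread.sleep(5000);     // wait a bit before the next attempt
            }
        }
    }
}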

TADOQuery SQL.add() submitting/preparing the sql

Overview:
I have written an application that allows a user to define a query, submit it to a server and view the results. The software can run on DB2 or MySQL.
Problem:
We've had issues in the DB2 version where a user has tried to run a query, and found that it has failed because their user profile has been disabled. In order to run a query on DB2 (on an IBM i), the user's profile name and password are provided in the connection string. Security on the server can specify that a user's profile is disabled after two or three incorrect logins.
Question:
I've debugged the application and found that the problem is down to the query being submitted twice. If the user's password is wrong, then of course, this has the knock-on effect of disabling their profile.
On further inspection of the logs on the server (while debugging line by line), I found that the query is submitted to the server when you call TADOQuery.sql.add(), and again when the TADOQuery's active property is set to true (which is the point at which I would expect the query to be submitted to the server). Here's an example of the code that I'm using to run the query:
adoqry.active := false;                           // make sure the query is closed first
adoqry.sql.clear;                                 // drop any previous statement
adoqry.sql.add('SELECT * FROM SOMEDB.SOMETABLE'); // this already triggers a Prepare on the server
adoqry.active := true;                            // this opens (runs) the query
My question is therefore quite simple:
1. Why does the TADOQuery.sql.add() method submit the query (when it should just be adding the SQL to the TADOQuery's sql property)?
2. What can I do to prevent this? That is, is there any way to prevent the SQL from being submitted when I call the add() method?
For those of you that would like extra information about the logs, the exit point logs on the IBM i show that when I call adoqry.sql.add in the above example, the query is run through the "Database Server-SQL Requests" exit point application, via function "Prepare and Describe". When I call adoqry.active := true in the above example, the same query goes through the same exit point application, but via the "Open/Describe" function.
If you're not familiar with the IBM i, don't worry about it - I'm just including that information as proof that I have traced the query being submitted twice. The real issue is with the TADOQuery's sql.add() processing.
From your description of your problem, I assume you specify the ConnectionString of the ADOQuery. Doing this combines the database login with the running of the query. You have found that this has undesirable side effects when the user's credentials are invalid.
Separate the database login from the query by using an ADOConnection. Specify the ConnectionString of the ADOConnection and assign the ADOConnection to the ADOQuery.Connection property. This way, you control the database login and can catch logins with bad credentials. Additionally the ADOConnection.Open method allows you to specify the username and password so you do not have to put them in the ConnectionString.
While this does not answer your specific questions, this approach will help you solve the problem of the user's profile being disabled by separating the login from the running of the query.

Replay Ruby on Rails logs including parameters and session information?

JMeter's Access Log Sampler requires logs in common log format to replay HTTP requests. I want to use it to replay actions in a Rails application from the log, including the params that were passed. Is there any way to do this with JMeter or any other tool? I suppose I could parse the logs into curl requests, but can that approach maintain session information (keeping track of which user performed which action)?
Edit: I should say what this is for. It can be useful for either performance testing or data recovery. In our case we need to verify some data in the database by using our logs, because the db may have data integrity issues.
I am looking at paper_trail to get this kind of functionality in the future. For the app in question, we had to do some heavy-duty log parsing to get the results we needed. It included separating the actions out by IP address (thus simulating sessions, although some IP addresses contained multiple user sessions) and parsing the actions and params out of the logs. It was not 100% effective at reproducing the exact state of the database, but it was pretty close.
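For reference, the kind of parsing described above looks roughly like the sketch below. This is only an illustration in Java: the regular expressions assume the usual 'Started GET "/path" for 1.2.3.4' and 'Parameters: {...}' lines of a Rails production log, and grouping by IP address is the same rough session approximation mentioned above:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.*;
import java.util.regex.*;

public class RailsLogReplayParser {

    // Matches lines such as: Started GET "/posts/1" for 10.0.0.5 at 2012-03-10 14:28:14 +0100
    static final Pattern REQUEST = Pattern.compile("Started (\\w+) \"([^\"]+)\" for ([\\d.]+)");
    // Matches lines such as:   Parameters: {"id"=>"1"}
    static final Pattern PARAMS = Pattern.compile("Parameters: (\\{.*\\})");

    public static void main(String[] args) throws IOException {
        // Group extracted actions by client IP to approximate user sessions.
        Map<String, List<String>> actionsByIp = new LinkedHashMap<>();

        String currentIp = null;
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            Matcher request = REQUEST.matcher(line);
            if (request.find()) {
                currentIp = request.group(3);
                actionsByIp.computeIfAbsent(currentIp, ip -> new ArrayList<>())
                           .add(request.group(1) + " " + request.group(2));
            } else if (currentIp != null) {
                Matcher params = PARAMS.matcher(line);
                if (params.find()) {
                    // Attach the params line to the most recent request from this IP.
                    List<String> actions = actionsByIp.get(currentIp);
                    int last = actions.size() - 1;
                    actions.set(last, actions.get(last) + " " + params.group(1));
                }
            }
        }

        actionsByIp.forEach((ip, actions) -> {
            System.out.println(ip + ":");
            actions.forEach(a -> System.out.println("  " + a));
        });
    }
}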
HTTP Raw Request for JMeter may help with this.
