Progress ABL: How to Test for WebSpeed in the Preprocessor

I want to conditionally compile some blocks of code depending on the type of client I'm running in. This is fine for batch and TTY, since I can use {&BATCH-MODE}, but how do I test whether the code is being compiled by a WebSpeed agent? E.g.:
&IF NOT "{&SOMETHING}" EQ "YES" &THEN
&ANALYZE-SUSPEND
foo
bar
&ANALYZE-RESUME
&ENDIF
It would be helpful if this did not rely on defines auto-generated by the AppBuilder in .w files etc., but that's a nice-to-have, not essential.

Compile time isn't run time. If the program can be run in different ways (as part of a web page using WebSpeed, as part of a batch job, as part of some other kind of client, etc.) you're most likely better off evaluating this at run time instead.
You can identify what environment you're running in:
DISPLAY SESSION:CLIENT-TYPE.
The SESSION:CLIENT-TYPE attribute identifies your type of client:
Type of client                     Attribute value
--------------------------------   -------------------
ProVision standard ABL client      4GLCLIENT
WebClient                          WEBCLIENT
AppServer agent                    APPSERVER
WebSpeed agent                     WEBSPEED
Pacific Application Server agent   MULTI-SESSION-AGENT
Other special-purpose clients      Unknown value (?)
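At run time you can then branch on that value; a minimal sketch:
IF SESSION:CLIENT-TYPE = "WEBSPEED" THEN
    MESSAGE "Running in a WebSpeed agent.".
ELSE
    MESSAGE "Running in some other client.".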
Documentation
Using VST
If you have at least one database connected, the _Connect-ClientType field tells you what kind of client a particular connection is:
Value    Client
------   --------------------
ABL      ABL client
SQLC     SQL client
WTA      WebSpeed agent
APSV     AppServer agent
SQFC     SQL Federated client
Example:
/* Find this session's own connection, then the matching _Connect record */
FIND FIRST _myconnection NO-LOCK.
FIND FIRST _connect NO-LOCK
    WHERE _connect._connect-usr = _myconnection._MyConn-userid.
DISPLAY _connect._Connect-ClientType.
Based on OS
Perhaps you run on different operating systems? Then this may be enough:
DISPLAY OPSYS.
Other ways
There are a number of other ways of doing this, including perhaps looking at PROPATH, the working directory, etc.
Try to stick with a solution that won't break over time because of Progress upgrades, new operating systems, new directory structures, etc.

IMHO there is no such preprocessor variable out of the box.
But you could create your own include file and include it in the code where it's relevant. You need two versions of that file; one says
&GLOBAL-DEFINE WebSpeed WebSpeed
and the other
&GLOBAL-DEFINE NoWebSpeed NoWebSpeed
And then configure your compile sessions so that they find exactly one of the files in PROPATH.
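Used from a program it might look like this (a minimal sketch; the include file name client-type.i is hypothetical, and DEFINED() returns 0 when a preprocessor name is undefined):
{client-type.i}  /* resolves to whichever version PROPATH finds first */
&IF DEFINED(WebSpeed) > 0 &THEN
DISPLAY "Compiled for WebSpeed.".
&ELSE
DISPLAY "Compiled for something else.".
&ENDIF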
But as you will agree, this is probably dangerous, as the result will rely heavily on the proper PROPATH being used during compilation. I'd rather use a run-time condition instead.
What are you trying to achieve in detail?

Finally figured it out this morning: {&WEBSTREAM} and {&OUT} are not defined in normal sessions, so I can just test for that. Runtime is not an issue in my case; I just want the code to compile in all cases. In this shop (don't ask me why) every single piece of code is session-compiled. Poor CPU, but there you go. I could be defensive and add some logic with SESSION:CLIENT-TYPE for bells and whistles, you're right. If not CAN-DO then boogie :)
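In preprocessor terms the test looks something like this (a minimal sketch; DEFINED() returns 0 when a name is undefined, and the exact name should match how the WebSpeed includes define it):
&IF DEFINED(WEBSTREAM) = 0 &THEN
/* this block is only compiled outside the WebSpeed environment */
MESSAGE "Not a WebSpeed compile." VIEW-AS ALERT-BOX.
&ENDIF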

Related

Parametrize Connections from Database parametrization table

We are using Pentaho Data Integration V7, working with multiple data origins and an Oracle DWH destination.
We have stored all the connection access data in a parametrization table; let's call it D_PARAM. All the connections are configured using parameters (${database_name}, etc.).
At the beginning of every job, we have a transformation with a "Set Variables" step which reads the right parameters from D_PARAM.
This all works fine. My problem is:
Every time we want to edit a single transformation, or while developing a new one, we can't use the parametrized connections, because the parameters haven't been set. We then need to use "hardcoded" connections during the development process.
Is there a better way to manage this situation? The idea of having the connections parametrized is to avoid errors and simplify connection management, but if in the end we need both kinds of connections, I don't see much benefit to them.
There's not a simple answer. You could rotate your kettle.properties file to change the default values, keeping all the values in the file:
D_PARAM = DBN
D_PARAM_DB1 = DB1
D_PARAM_DB2 = DB2
...
Then just update D_PARAM with the value you need from the different D_PARAM_DBn entries before starting PDI. It's a hassle to be constantly updating the kettle.properties file, but it works out of the box.
You could also try working with environments. For this you would have to install a plugin available on GitHub: https://github.com/mattcasters/kettle-environment. It was created by a former PDI developer. I don't know if it works with the v7 version; it was updated to work with 8.2, but it would probably work with v7. To test it, you can install your PDI version in another directory on your PC and install the plugin there (along with the other plugins you have in your current installation), so you don't break your setup. This blog entry gives you details on how to use environments: http://diethardsteiner.github.io/pdi/2018/12/16/Kettle-Environment.html
I don't know if the environments plugin will solve your problem, because you can't change the environment in the middle of a job, but for me, using the Maitre script with environments when I run a job or transformation has made it easier to work with different projects/paths in my setup.
In Spoon you can click on the "Edit" menu and "Set environment variables". It will list all variables currently in use, and you can set their values; the transformation will then use those values when you run it.
This also works in Preview, but it's somewhat buggy: it doesn't always pick up updated values.

F# Type Providers and Continuous Integration, Part 2

This is a follow-up question (actually several) to my previous question on F# Type Providers and Continuous Integration.
It seems to me that it would be a good idea to use the SqlDataConnection type provider as a compile-time check that code/database integrity remains intact in feature-branch-driven development: at every commit/build you would know that no changes have been made to the code that have not also been applied to the database, assuming that building the database is also part of your CI process.
However, a couple of questions arise:
The name (as well as the location) of the config file is not the same at compile time as at runtime, e.g. app.config -> MyApp.exe.config, which will result in a runtime error if you try to use
SqlDataConnection<ConnectionStringName="DbConnection", ConfigFile="app.config">
(Actually, specifying ConfigFile="app.config" is not necessary, since it is the default value.)
The runtime error can be avoided by copying the app.config file to the output directory (there's a setting for that), but that would result in having both an app.config and a MyApp.exe.config file in the output directory. Not very pretty. Adding a separate configuration file for type providers would be another solution, but IMHO that's not very pretty either.
Question: Has anyone come up with a more elegant solution to this problem?
The next problem arises when you come to the build server. Most likely you don't want to compile against the same database as you did while developing, thus requiring a different connection string. And yes, in production you'd need yet another one.
Question: How do you go about solving this in the most convenient way? Remember, the solution has to be a working part of a CI process!
This strategy would require generating the database on each build at the build server, probably from a baseline script with some feature/sprint update scripts.
Question: Has anyone tried this and how did it affect build times? If yes, how did you create this step?
At runtime you can use the GetDataContext overload that accepts a connection string. See here: Providing connection string to Linq-To-Sql data provider
I am familiar with the solution that @Gustavo proposes, but I've always felt that there is something fishy about it. Something about having to specify the connection string twice... However, when I once again got the same reply, I slowly started to realize that the answer must be correct, and that it's me who's thinking wrong. These are my conclusions:
The reply states that you can use the GetDataContext overload that accepts a connection string. If you change this "can" to "should", things become clearer, at least to me.
The thing is that I've been thinking of the class definition as both a compile-time directive and a run-time variable, but even though this is possible, it is hardly a good idea. The default value of the ConfigFile argument is, as we've already seen, "app.config", but that file doesn't exist (under that name) at run time, so it doesn't make sense to try to use it, leaving GetDataContext as the only reasonable option. I suggest that you do this:
Keep your compile-time settings in a file called compilation.config (or whatever you prefer) and specify this file for usage in the class definition.
Use the GetDataContext overload that accepts a connection string for run-time resolution and specify this connection string in the app.config file.
You will end up with something looking like this:
open System.Configuration
open Microsoft.FSharp.Data.TypeProviders
type private dbSchema = SqlEntityConnection<ConnectionStringName="DbConnection", ConfigFile="compilation.config">
let private db = dbSchema.GetDataContext(ConfigurationManager.ConnectionStrings.["DbConnection"].ConnectionString)
Heck, this even supports the SRP; compile-time settings in one file and run-time settings in another!
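For what it's worth, the two files might look like this (a sketch; the server and database names are placeholders):
<!-- compilation.config: read only by the type provider at compile time -->
<configuration>
  <connectionStrings>
    <add name="DbConnection"
         connectionString="Data Source=localhost;Initial Catalog=DevDb;Integrated Security=True" />
  </connectionStrings>
</configuration>

<!-- app.config: deployed as MyApp.exe.config and read at run time -->
<configuration>
  <connectionStrings>
    <add name="DbConnection"
         connectionString="Data Source=ci-server;Initial Catalog=CiDb;Integrated Security=True" />
  </connectionStrings>
</configuration>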
Regarding the rest of my questions, I assume that the answer lies in some scripting in the Continuous Deployment pipeline. Still interested in other solutions though.

SSIS Data Flow for a Noob

I'm still very new at programming, and our local SSIS genius isn't here today for me to pick his brain.
I am working on an existing SSIS package, making modifications to a specific .dtsx file. The data flow has an OLE DB source, whose SQL query I have successfully changed to fit my project specs. The destination is a Flat File connection, whose column mappings I have modified to fit the new query.
I have a few concerns:
The source connection originally used SQL Server Authentication, and I don't have the user name or password. I can use Windows Authentication to test it locally, but in the end it will be set up by someone else as a scheduled task on a server somewhere. (I realize this is probably a question for people at my work, but I figured I would fill you guys in).
The destination preview doesn't show anything. I can, however, successfully parse and preview the Source query...
I also don't understand what "Error Output" means on the Source Editor.
Is this set up correctly already, or does it mean there will be some errors in the output?
Any explanations or elaborations would be helpful, but my overall question is: "Am I missing something for this .dtsx, or is this project finished and ready to be set up as a scheduled task?"
Regarding the credentials: it will depend on the package configuration. Usually the user name and password are read from a configuration mechanism (a file or a server).
Regarding the destination preview: yes, it should be fine.
Regarding "Error Output": it specifies what the component should do when it finds an error. It can fail the component or ignore the error, for example. It does not mean there will be errors in the output.

Modifying Grails apps post deployment

I'm investigating Grails vs. other Agile web frameworks, and one key use case I'm trying to support is the ability to modify controllers and install plugins post deployment. It appears that this isn't possible with Grails, but I want to make sure before I write it off.
As far as modifying controllers goes, it would be sufficient if the Groovlet behavior existed (compile-on-demand).
As far as plugin installs go, I understand this may be a long shot, but I thought I'd check to be sure.
For your information, I need this because I work on a product that requires a little site-specific customization, such as adding validation of simple meta-data, integrating with customer security environments, and maybe even including new controllers/pages quickly.
Out of the box, no, Grails doesn't really support what you want. There may be ways to customize it, but I've never looked into it. A PHP framework might be more of an ally here, since there is no real deployment process other than copying PHP files to a location.
That said, I personally would prefer a strict set of deployment policies. And honestly, deploying changes with Grails is as simple as running the 'grails war' command and copying that WAR to your servlet container. The downtime is negligible, and if you have multiple web servers behind a load balancer, your customers should never see downtime due to deployments.
Although it's not recommended for complex coding, you could execute Groovy code from a string stored in a database or a file, on the fly at run time.
Check out the Groovy template engine:
http://groovy.codehaus.org/Groovy+Templates
But even then, you are still limited in what you can and can't do, and debugging will be lacking. You may want to consider an interpreted language; PHP, Perl, and ColdFusion are a few to mention.
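For example, evaluating Groovy source held in a string at run time might look like this (a minimal sketch using GroovyShell rather than the template engine):
// the script text could come from a database column or a file
def scriptText = 'return 2 + 3'
def shell = new GroovyShell()
assert shell.evaluate(scriptText) == 5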

Why is WSCript object not known to my script that's controlled by a custom IScriptControl?

I am using someone else's library that, it appears, provides its own scripting host instance.
This lib provides me with functions to set the scripting language, such as "jscript" and "vbscript", and I can supply it with script code and have that executed, passing arguments in and back. So, basically, it works.
However, when I try to access the "WScript" object, I get an exception saying that this keyword is undefined.
The developer, not knowing much about this either (he only made this lib for me because I do not want to deal with Windows SDKs right now), told me that he is using "IScriptControl" for this.
Oh, and the lib also provides flags to allow "only safe subset" and "allow UI", which I set to false and true, respectively.
Does that ring a bell with anyone? Does a user of IScriptControl have to take extra steps in order to make a WScript object available? Or can he use IScriptControl in a way that supplies this automatically, just as when running the same script from wscript.exe?
Basically, all I need is the WScript.CreateObject function in order to access another app's API via COM.
I don't know why WScript is not known, but I suspect it is because the script host doesn't provide it. Maybe only wscript.exe does.
If you are using JScript, you can create an object with new ActiveXObject(). If you are using VBScript, you can just use CreateObject.
See this article for some background.
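For example, where a WSH script would call WScript.CreateObject, a script running under IScriptControl can do this instead (a minimal JScript sketch; FileSystemObject is just a stand-in for whatever COM server you need to reach):
// ActiveXObject is part of the JScript language itself,
// so no WScript host object is needed
var fso = new ActiveXObject("Scripting.FileSystemObject");
if (fso.FolderExists("C:\\Temp")) {
    // talk to the object's COM API as usual
}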
