Parametrize Connections from a Database Parametrization Table

We are using Pentaho Data Integration V7, working with multiple data origins and an Oracle DWH destination.
We have stored all the connection access data in a parametrization table, let's call it D_PARAM. All the connections are configured using parameters (${database_name}, etc.).
At the beginning of every job we have a transformation with a "Set variables" step which reads the right parameters from D_PARAM.
This all works fine; my problem is:
Every time we want to edit a single transformation, or while developing a new one, we can't use the parametrized connections because the parameters haven't been set. We then need to use "hardcoded" connections during the development process.
Is there a better way to manage this situation? The idea of having the connections parametrized is to avoid errors and simplify connection management, but if in the end we need both kinds of connections, I don't see much benefit.

There's no simple answer. You could rotate your kettle.properties file to change the default values, keeping all the values in the file:
D_PARAM = DBN
D_PARAM_DB1 = DB1
D_PARAM_DB2 = DB2
...
Then just update D_PARAM with the value you need from the different D_PARAM_DBN entries before starting PDI. It's a hassle to be constantly updating the kettle.properties file, but it works out of the box.
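For instance, switching Spoon over to the DB2 values would just mean editing the active line before starting it (values are illustrative, not from the original post):

# active value, read by the parametrized connections
D_PARAM = DB2
# parked alternatives, copied into D_PARAM as needed
D_PARAM_DB1 = DB1
D_PARAM_DB2 = DB2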
You could also try working with environments. For this you would have to install a plugin available on GitHub: https://github.com/mattcasters/kettle-environment. It was created by a former PDI developer; I don't know if it works with v7 (it was updated to work with 8.2), but it would probably work. To test it, you can install your PDI version in another directory on your PC and install the plugin there (along with the other plugins you have in your current installation), so you don't break your setup. This blog entry gives you details on how to use the environments: http://diethardsteiner.github.io/pdi/2018/12/16/Kettle-Environment.html
I don't know if the environments plugin would solve your problem, because you can't change the environment in the middle of a job, but for me, using the maitre script with environments when I schedule a job or transformation has made it easier to work with different projects/paths in my setup.

In Spoon you can click on the "Edit" menu and "Set environment variables". It'll list all variables currently in use, and you can set their values; the transformation will then use those values when you run it.
This also works in Preview, but it's somewhat buggy: it doesn't always pick up updated values.

Related

Umbraco Bi-directional Deployment

I'm using Umbraco 7.4.x. I've been trying to figure out the best way to do bi-directional deployments.
As in, we have more than one dev working locally, and we have a dev server and a live server. We have single-click deploys from local to dev, but that's only code. We were copying the databases up to dev, but now we also have people who need to enter content on dev. This leads to changes being made in the dev database as well, and to copying the database back down. We do all this with version control, of course, but it's still all very inconvenient.
Is there a better approach to this that I'm missing? I tried using uSync a few months ago, but we'd often run into crashes.
I have heard of Courier; it seems like it would be good for deploying from dev/stage to production, but would it also work for pushing content/doc type changes to our local machines? I wasn't sure, as they're not web servers on the internet, just local IIS Express instances running through Visual Studio.
Thanks in advance!
We use uSync (uSync + uSync.ContentEdition - https://our.umbraco.org/projects/developer-tools/usync/) for moving everything between instances. Give it another shot, as it has changed since you explored it in the past. It's worth mentioning that it requires careful configuration across the different environments to avoid conflicts, etc.
You can also use Courier; its latest version is used by Umbraco Cloud (http://umbraco.io/), which may also interest you, as it gives you full control over deployment processes between multiple Umbraco instances.
One option is to have all of your developers set up to work off of the same dev database. On occasion, your developers might have to "Republish the entire site" or rebuild the Examine indexes to make sure their caches and TEMP files are up to date. Otherwise, this has worked well for us for many years. One frustrating part is that media files uploaded by dev A won't immediately be on the file system for dev B. You should be able to move your media to Azure blob storage to work around this problem; there is a package that should help set that up.
I wouldn't recommend uSync.ContentEdition. I haven't tried it personally, but I have yet to hear a good report about it. uSync, on the other hand, has been a lifesaver for us, even if it isn't perfect. At this point we install uSync on every site, even if we never configure it to read in changes. We like that we can record our changes to document types and datatypes in source control. Working with the shared database setup means that we don't need uSync to be reading in changes on our dev and local environments. However, you will need to make sure that your devs all understand uSync: if dev A adds a doc type, the uSync .def file for that doc type can show up on the file system for dev B, and dev B should not commit that file in that situation.
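For that "record but never read in" setup, the switches live in uSync's config file. A rough sketch (file and element names as I remember them from uSync 3.x for Umbraco 7; verify against the config shipped with your version):

<!-- config/uSyncBackOffice.Config -->
<uSync>
  <BackOffice>
    <Import>false</Import>            <!-- don't read changes in at startup -->
    <ExportOnSave>true</ExportOnSave> <!-- still record every change to disk -->
    <WatchFolder>false</WatchFolder>  <!-- don't react to .def file changes -->
  </BackOffice>
</uSync>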
Courier has been working a lot better recently. I wouldn't recommend it unless you are running Umbraco 7 and can get the latest version of Courier. Courier is very useful, but you should do a lot of testing with it before you hand it over to a client, because Courier gives you the ability to shoot yourself in the foot in a big way. It has definitely improved: in Courier for Umbraco 6 I used to have to try really hard to deploy without breaking my site; now, in Courier for Umbraco 7, I have to try really hard to break it. It is now a viable option for deploying content changes to production. Just make sure you test it heavily before you use it in a production environment.

Progress ABL: How to Test for WebSpeed in the Preprocessor

I want to conditionally compile some blocks of code depending on the type of client I'm running in. This is fine for batch and TTY, as I can use {&BATCH-MODE}, but how do I test whether the code is being compiled by a WebSpeed agent? E.g.:
&IF "{&SOMETHING}" NE "YES" &THEN
&ANALYZE-SUSPEND
foo
bar
&ANALYZE-RESUME
&ENDIF
It would be helpful if this did not rely on defines auto-generated by the Architect in .w files etc., but that's a nice-to-have, not essential.
Compile time isn't run time. If the program can be run in different ways (as part of a web page using WebSpeed, as part of a batch job, as part of some other kind of client, etc.), you're most likely better off evaluating this at run time instead.
You can identify what environment you're running in:
SESSION:CLIENT-TYPE
This attribute identifies your type of client:
DISPLAY SESSION:CLIENT-TYPE.
Type of client                     Attribute value
---------------------------------  -----------------------
ProVision standard ABL client      4GLCLIENT
WebClient                          WEBCLIENT
AppServer agent                    APPSERVER
WebSpeed agent                     WEBSPEED
Pacific Application Server agent   MULTI-SESSION-AGENT
Other special-purpose clients      Unknown value (?)
Documentation
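A minimal run-time branch on this attribute might look like the following (the messages are just illustrative):

/* Branch on the client type at run time instead of at compile time */
IF SESSION:CLIENT-TYPE = "WEBSPEED" THEN DO:
    /* WebSpeed-specific behaviour goes here */
    MESSAGE "Running in a WebSpeed agent.".
END.
ELSE DO:
    /* TTY, batch, GUI clients etc. */
    MESSAGE "Running in a" SESSION:CLIENT-TYPE "client.".
END.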
Using VST
If you have at least one database connected, the _Connect-ClientType field of the _Connect VST tells you what kind of client each particular connection is:
Value    Client
-------  ---------------------
ABL      ABL client
SQLC     SQL client
WTA      WebSpeed agent
APSV     AppServer agent
SQFC     SQL Federated client
Example:
/* _MyConnection identifies the current session's own connection */
FIND FIRST _myconnection NO-LOCK.
/* Look up the matching _Connect record for this user number */
FIND FIRST _connect NO-LOCK WHERE _connect._connect-usr = _myconnection._MyConn-userid.
DISPLAY _connect._Connect-ClientType.
Based on OS
Perhaps you run different OSes?
DISPLAY OPSYS.
Other ways
There are a number of other ways of doing this, including perhaps looking at the PROPATH, the working directory, etc.
Try to stick with a solution that won't change over time because of Progress upgrades, new OSes, new directory structures, etc.
IMHO there is no such preprocessor variable out of the box.
But you could create your own include file and include it in the relevant code. You need two versions of that file; one says
&GLOBAL-DEFINE WebSpeed WebSpeed
and the other
&GLOBAL-DEFINE NoWebSpeed NoWebSpeed
Then configure your compile sessions so that they find exactly one of the two files in the PROPATH.
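Code consuming the define could then look something like this (clienttype.i is an illustrative file name, not from the original answer):

/* Pull in whichever version of the include the PROPATH resolves to */
{ clienttype.i }

&IF DEFINED(WebSpeed) <> 0 &THEN
/* this block is only compiled when the WebSpeed variant was found */
&ENDIF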
But as you will agree, this is probably dangerous, as the result relies heavily on the proper PROPATH being used during compilation. I'd rather attempt to use a runtime condition instead.
What are you trying to achieve, in detail?
Finally figured it out this morning: {&WEBSTREAM} and {&OUT} are not defined in normal sessions, so I can just test for that. Runtime is not an issue in my case; I just want to compile the code in all cases. In this shop (don't ask me why) every single piece of code is session-compiled. Poor CPU, but there you go. You're right that I could be defensive and add some logic with SESSION:CLIENT-TYPE for bells and whistles. If not can-do then boogie :)
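Based on that finding, the compile-time test could be sketched like this (assuming the standard WebSpeed includes have already defined OUT as a preprocessor name at this point in the compile):

&IF DEFINED(OUT) <> 0 &THEN
/* being compiled by a WebSpeed agent */
&ELSE
/* batch, TTY or GUI compile */
&ENDIF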

May I use sf_sandbox directly as my symfony project?

As sf_sandbox has already set up the symfony environment, why not develop in the sandbox directly and then upload it to the server? What are the disadvantages of the sandbox compared with configuring a project manually?
I think there is no drawback to following this approach. sf_sandbox is a pre-configured symfony project. One of the pluses is that it saves you the time of creating your project and initializing an empty application (by default called frontend).
It's more a matter of taste than a matter of right or wrong. It's up to you!
Note: If you follow this approach you have to do some initial configuration (steps 1-3 would be done anyway if you started your project from scratch):
1. Rename the project.
2. Change the config/properties.ini file.
3. Change the config/databases.yml file (by default sf_sandbox uses an SQLite database).
4. Remove the data/sandbox.db database file.
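For step 3, a databases.yml pointing at MySQL instead of the bundled SQLite file might look like this (assuming a symfony 1.x sandbox with Doctrine; host, database and credential values are placeholders):

all:
  doctrine:
    class: sfDoctrineDatabase
    param:
      dsn:      mysql:host=localhost;dbname=myproject
      username: myuser
      password: mypass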

Modifying Grails apps post deployment

I'm investigating Grails vs. other Agile web frameworks, and one key use case I'm trying to support is the ability to modify controllers and install plugins post deployment. It appears that this isn't possible with Grails, but I want to make sure before I write it off.
As far as modifying controllers goes, it would be sufficient if the Groovlet behavior existed (compile-on-demand).
As far as plugin installs go, I understand this may be a long shot, but I thought I'd check to be sure.
For your information, I need this because I work on a product that requires a little site-specific customization, such as adding validation of simple meta-data, integrating with customer security environments, and maybe even including new controllers/pages quickly.
Out of the box, no, Grails doesn't really support what you want. There may be ways to customize it, but I've never looked into it. A PHP framework might be more of an ally here, since there is no real deployment process beyond copying PHP files to a location.
That said, I personally would prefer a strict set of deployment policies. Honestly, deploying changes with Grails is as simple as running the 'grails war' command and copying the resulting war to your servlet container. The downtime is negligible, and if you have multiple web servers behind a load balancer, your customers should never see downtime due to deployments.
Although it's not recommended for complex coding, you could execute Groovy code from a string stored in a database or a file, on the fly at run time.
Check out the Groovy template engine:
http://groovy.codehaus.org/Groovy+Templates
But even then, you are still limited in what you can and can't do, and debugging will be lacking. You may want to consider an interpreted language instead; a few that come to mind are PHP, Perl and ColdFusion.
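A minimal sketch of that idea using GroovyShell (the rule string and variable names are illustrative; a real system would load the source from a database or file):

import groovy.lang.GroovyShell
import groovy.lang.Binding

// Site-specific snippet that would normally be loaded at run time
String ruleSource = 'metadata.name != null && metadata.name.size() <= 50'

// Expose the data the snippet may reference, then evaluate it
def binding = new Binding([metadata: [name: 'widget']])
def shell = new GroovyShell(binding)
boolean valid = shell.evaluate(ruleSource)
println "valid: ${valid}"   // prints: valid: true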

Switch between development version and live version

Does anyone have any good techniques for easily switching between development and live builds for ASP.NET MVC websites? Every time I make some changes I need to go through my web.config, comment out all my local settings and uncomment all my remote settings. I also need to update the LINQ to SQL .dbml file to point to the right connection string.
This happens every time I make a change in a controller. After I upload the changes I then have to go through the same process to get it back to development (local) mode.
Is there an automatic way to handle this, or at least one setting that can flag between the two?
Thanks
One way I've done this is to make two groups of configuration settings in appSettings:
<add key="LiveSomeSetting" value="something" />
<add key="TestSomeSetting" value="anotherthing" />
Then, in my class that reads configuration info, I check something like the system environment / computer name: if it matches the name of your LIVE machine, read the Live settings, otherwise the Test ones.
// Choose the settings prefix from the machine name
// (both sides lower case so the comparison can match)
if (System.Environment.MachineName.ToLower().StartsWith("devmachinename"))
    IsLive = "Test";   // development box -> Test* settings
else
    IsLive = "Live";   // anything else -> Live* settings
Application["IsLive"] = IsLive;
SomeSetting = ConfigurationManager.AppSettings[IsLive + "SomeSetting"];
I use a simple technique that Scott Hanselman blogged back in '07. It basically involves maintaining a separate web.config file for each of your build types, plus a pre-build event that copies the correct one into place.
It does have the downside of maintaining two or more web.config files, but once you're up and running it isn't really that big an issue.
Have a look here at the article:
Managing Multiple Configuration File Environments with Pre-Build Events
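The pre-build event at the heart of that technique is just a copy keyed off the build configuration, along these lines (assuming per-configuration files named web.config.Debug and web.config.Release sitting next to the project file):

copy /Y "$(ProjectDir)web.config.$(ConfigurationName)" "$(ProjectDir)web.config"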
HTHs,
Charles
