WebSphere configuration with ant & jython

I have been able to successfully configure resource environment entries with the Jython script below. I call the Jython script with the wsadmin program in my local app server's bin directory.
I work on a team where ant is the preferred technology in our build process.
I've looked around the web for documentation on configuring WebSphere with ant, and so far it looks to me as though ant is mainly used to call programs like wsadmin.
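(For illustration, invoking wsadmin from ant typically looks something like the sketch below; the target name, the was.home property, and the script file name are placeholders. WebSphere also ships ant tasks, e.g. a WsAdmin task in the com.ibm.websphere.ant.tasks package, that wrap the same call.)
<!-- Sketch: run a Jython configuration script through wsadmin from ant -->
<target name="configure-resource-env-entries">
    <exec executable="${was.home}/bin/wsadmin.bat" failonerror="true">
        <arg value="-lang"/>
        <arg value="jython"/>
        <arg value="-f"/>
        <arg value="createResourceEnvEntries.py"/>
    </exec>
</target>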
Is it possible to configure resource environment entries using ant directly instead of using a Jython or Jacl script? If not, how can I go about setting up an ant task to reduce the amount of Jython that is needed to set up resource environment entries?
Here's my current Jython script that sets up resource environment entries. Ultimately I'm looking for ways to reduce our dependence on Jython...
# Set up Variables used within this script
objServerAttrs = AdminControl.completeObjectName('WebSphere:type=Server,*')
node = AdminControl.getAttribute(objServerAttrs, 'nodeName')
server = AdminControl.getAttribute(objServerAttrs, 'name')
provider = "Test_ConfigurationProvider"
providerFactory = "com.DG_ConfigurationFactory"
providerClass = "com.DG_Configuration"

# Function for creating resource custom properties
def createResourceCustomProperty(envEntry, propName, propValue):
    propSet = AdminConfig.showAttribute(envEntry, 'propertySet')
    if propSet == None:
        propSet = AdminConfig.create('J2EEResourcePropertySet', envEntry, [])
    name = ['name', propName]
    value = ['value', propValue]
    propAttrs = [name, value]
    AdminConfig.create('J2EEResourceProperty', propSet, propAttrs)
    return

# Create the resource environment provider
AdminResources.createResourceEnvProvider(node, server, provider)
AdminResources.createResourceEnvProviderRef(node, server, provider, providerFactory, providerClass)

# Create the resource environment entries
## Context Configuration
envEntry = AdminResources.createResourceEnvEntries(node, server, provider, "Context Configuration", "test-config/context")
createResourceCustomProperty(envEntry, "deployment.environment", "IDE")
createResourceCustomProperty(envEntry, "server.context", "com.context.DG_WebSphereServerContext")
createResourceCustomProperty(envEntry, "user.context", "com.context.DG_WebSphereUserContext")
createResourceCustomProperty(envEntry, "log.directory", "C:/Development/WebSphere/Logs")
createResourceCustomProperty(envEntry, "file.directory", "C:/Development/WebSphere/AppFiles")

## Mail Configuration
envEntry = AdminResources.createResourceEnvEntries(node, server, provider, "Mail Configuration", "test-config/mail")
createResourceCustomProperty(envEntry, "enabled", "false")
createResourceCustomProperty(envEntry, "mailSessionJndiName", "mail/MailSession")

## User Repository Configuration
envEntry = AdminResources.createResourceEnvEntries(node, server, provider, "User Repository Configuration", "test-config/userRepository")
createResourceCustomProperty(envEntry, "ldap.provider.url", "ldap://test.com:389/cn=users,dc=com")
createResourceCustomProperty(envEntry, "ldap.security.principal", "cn=was_user,cn=users,dc=com")

# Save changes to the configuration
AdminConfig.save()

Starting with WAS 7, in addition to the admin console and wsadmin, a third way to configure the server was introduced, namely properties-file-based configuration. This new administrative model supposedly "eliminates the need to write complex wsadmin scripts", as explained in the related IBM Education Assistant presentation.
What you do, basically, is configure a single environment, export the parts of the configuration that are of interest to a portable properties file, and later use this file as input to a single line of wsadmin script, which applies the configuration in the properties file to another target server. So you get rid of many lines of Jython and work with a much simpler artefact: a properties file with a simple and familiar syntax.
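For example, the round trip boils down to a couple of wsadmin (Jython) calls along these lines (a sketch; the file name is a placeholder):
# On the already-configured server: extract the configuration of interest to a portable properties file
AdminTask.extractConfigProperties(['-propertiesFileName', 'resourceEnvEntries.props'])
# On the target server: validate and apply that properties file, then save
AdminTask.validateConfigProperties(['-propertiesFileName', 'resourceEnvEntries.props'])
AdminTask.applyConfigProperties(['-propertiesFileName', 'resourceEnvEntries.props'])
AdminConfig.save()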
In addition to the links above, there is a nice article about this feature on developerWorks.

Related

Bazel fetch remote file not as a WORKSPACE rule?

In Bazel, how do I fetch a remote file as a build rule not as a WORKSPACE rule?
I want to use a build rule because WORKSPACE rules are not loaded transitively.
e.g. this fails:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
http_file(
name = "foo",
urls = [ "https://example.com" ],
sha256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
executable = True,
)
Error in repository_rule: 'repository rule http_file' can only be called during workspace loading
If you really want to do that, you have to implement your own rule; a naïve, trivial example relying on curl to fetch could be:
def _impl(ctx):
    args = ctx.actions.args()
    args.add("-o", ctx.outputs.out)
    args.add(ctx.attr.url)
    ctx.actions.run(
        outputs = [ctx.outputs.out],
        executable = "curl",
        arguments = [args],
    )

get_stuff = rule(
    _impl,
    attrs = {
        "url": attr.string(
            mandatory = True,
        ),
    },
    outputs = {"out": "%{name}.out"},
)
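A BUILD file could then use it roughly like this (assuming the rule above lives in a file named fetch.bzl, a hypothetical name):
load(":fetch.bzl", "get_stuff")

# Produces foo.out by fetching the URL at build time
get_stuff(
    name = "foo",
    url = "https://example.com",
)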
But, especially in such a trivial form, it comes with problems. Apart from the obvious questions (do you want to step out of the sandbox during the build, and do you want to talk to someone across the network during the build?), it bypasses the repository_cache and possibly gets the remote_cache involved (networked caching of networked fetching). Specifically in this example, if the content of the file pointed to by url changes, the build has no idea and only fetches it again when it either hasn't done so yet or the url itself has changed. I.e. the implementation would need to be more robust (mimicking that of http_file, for instance).
But it actually sounds like you're trying to address a different problem (transitive external dependencies), for which there could be another solution. One trick used for that is to define a macro in your first-level dependency that declares the next hop; after declaring that first step as an external dependency in your parent project, you load that macro and call it from the parent project's WORKSPACE, as sketched below. This too has a price, though: the first-level dependency always has to be present (fetched or already cached), even if the build target you asked for does not actually need it (as that load and macro call will always pull it in).
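A sketch of that macro trick, with hypothetical repository and file names. In the first-level dependency, a deps.bzl:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def my_lib_deps():
    # Declare the second-hop dependencies, unless the parent workspace
    # already declared them itself.
    if "second_hop" not in native.existing_rules():
        http_archive(
            name = "second_hop",
            urls = ["https://example.com/second_hop.tar.gz"],
            sha256 = "<checksum>",
        )

And in the parent project's WORKSPACE, after declaring my_lib itself:
load("@my_lib//:deps.bzl", "my_lib_deps")
my_lib_deps()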

Jenkins: read an existing job's plugin config via Groovy

We are in the early phases of using the Jenkins DSL. One challenge we have come across is being able to read in an existing job's plugin settings so that we can retain them before running the DSL. This gives our Jenkins users the option to keep some of their settings. We have successfully retained the schedule settings for our jobs, but the newest challenge is retaining a plugin setting, specifically a setting in the "ExtendedEmailPublisher" plugin.
In the config.xml file for this job, inside the ExtendedEmailPublisher tags, we see the following:
<publishers>
  <hudson.plugins.emailext.ExtendedEmailPublisher>
    <recipientList>Our_Team@Our_Team.com</recipientList>
    <configuredTriggers>
      <hudson.plugins.emailext.plugins.trigger.FailureTrigger>
        <email>
          <recipientList/>
          <subject>$PROJECT_DEFAULT_SUBJECT</subject>
          <body>$PROJECT_DEFAULT_CONTENT</body>
          <recipientProviders>
            <hudson.plugins.emailext.plugins.recipients.ListRecipientProvider/>
          </recipientProviders>
          <attachmentsPattern/>
          <attachBuildLog>false</attachBuildLog>
          <compressBuildLog>false</compressBuildLog>
          <replyTo>$PROJECT_DEFAULT_REPLYTO</replyTo>
          <contentType>project</contentType>
        </email>
      </hudson.plugins.emailext.plugins.trigger.FailureTrigger>
    </configuredTriggers>
    <contentType>default</contentType>
    <defaultSubject>$DEFAULT_SUBJECT</defaultSubject>
    <defaultContent>$DEFAULT_CONTENT</defaultContent>
    <attachmentsPattern/>
    <presendScript>$DEFAULT_PRESEND_SCRIPT</presendScript>
    <classpath/>
    <attachBuildLog>false</attachBuildLog>
    <compressBuildLog>false</compressBuildLog>
    <replyTo>$DEFAULT_REPLYTO</replyTo>
    <saveOutput>false</saveOutput>
    <disabled>false</disabled>
  </hudson.plugins.emailext.ExtendedEmailPublisher>
</publishers>
The value we would like to extract/preserve from this XML is:
<disabled>false</disabled>
We have tried getting the existing values using Groovy but can't seem to find the right code. Our first idea was to read the value from the config.xml using XmlSlurper. We ran this from the Jenkins Script Console:
def projectXml = new XmlSlurper().parseText("curl http://Server_Name:8100/job/Job_Name/api/xml".execute().text);
(We use 8100 for our Jenkins port.)
Unfortunately, while this does return some config info, it does not return the plugin info.
Then we also tried running the following to read/replace the existing settings:
def oldJob = hudson.model.Hudson.instance.getItem("Job_Name")
def isDisabled = false // Default Value
for (publisher in oldJob.publishersList) {
    if (publisher instanceof hudson.plugins.emailext.ExtendedEmailPublisher) {
        isDisabled = publisher.disabled
    }
}
And while this works if executed from the Jenkins Script Console, when we try to use it in a DSL Job we get the message:
Processing provided DSL script
ERROR: startup failed:
script: 25: unable to resolve class hudson.plugins.emailext.ExtendedEmailPublisher
 @ line 25, column 37.
   if (publisher instanceof hudson.plugins.emailext.ExtendedEmailPublisher) {
1 error
Finished: FAILURE
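(One way around that unresolved-class error, as a sketch: compare the publisher's class name as a string instead of using instanceof, so the DSL script never has to reference the plugin class at compile time.)
def oldJob = hudson.model.Hudson.instance.getItem("Job_Name")
def isDisabled = false
for (publisher in oldJob.publishersList) {
    // Match by fully qualified class name rather than by class reference
    if (publisher.class.name == 'hudson.plugins.emailext.ExtendedEmailPublisher') {
        isDisabled = publisher.disabled
    }
}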
SOLUTION UPDATE:
Using @aflat's suggestion of requesting config.xml to get the raw XML config info, I was able to use XmlSlurper and then use the getProperty method to assign the property I wanted to a variable.
def projectXml = new XmlSlurper().parseText("curl http://Server_Name:8100/job/Job_Name/config.xml".execute().text);
def emailDisabled = projectXml.publishers."hudson.plugins.emailext.ExtendedEmailPublisher".getProperty("disabled");
If you want to parse the config.xml, use
def projectXml = new XmlSlurper().parseText("curl http://Server_Name:8100/job/Job_Name/config.xml".execute().text);
That should return your raw config.xml data.
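(From the Script Console you can also read a job's config.xml without shelling out to curl, for example via the item's config file; a sketch:)
def job = jenkins.model.Jenkins.instance.getItem("Job_Name")
// XmlFile.asString() returns the raw config.xml contents
def projectXml = new XmlSlurper().parseText(job.getConfigFile().asString())
def emailDisabled = projectXml.publishers."hudson.plugins.emailext.ExtendedEmailPublisher".disabled.text()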
Under "Manage Jenkins->Configure Global Security" did you try disabling "Enable script security for Job DSL scripts"?

Are TFS Build Agent User Capabilities' Values Obtainable Within Build Steps?

I'm trying to write a build step within TFS that relies on knowing where the Build agent has nuget.exe stored (the standard nuget-install step mucks around with the order of arguments in a way that breaks build execution, so I want to run the exe myself using one of the batch/shell/ps steps).
It would seem that setting up a capability on the Build Agent with that path would make sense, but I cannot seem to reference the value in any of my build steps, and I cannot find anything helpful on MSDN.
I'm expecting it to be something like $(Env.MyUserCapability), but it never resolves to the value.
Is it possible to retrieve a capability value within a build step? And if so, how do you do it? And if not, what is a viable alternative?
The user-defined capabilities are metadata only. However, you can set a machine-wide environment variable (e.g. NUGET) pointing to a nuget.exe; when you restart the agent, the machine-wide environment is discovered as a capability, and you can then use it.
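For example, if the machine-wide variable is called NUGET (a placeholder name) and holds the full path to nuget.exe, a PowerShell build step can consume it directly once the agent has been restarted and has re-registered its capabilities:
# Sketch: NUGET is an assumed machine-level environment variable on the agent
if (-not $Env:NUGET) {
    throw "NUGET environment variable / capability is not set on this agent"
}
& $Env:NUGET restore "MySolution.sln" -NonInteractive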
If you are writing a custom task, you can also add a nuget.exe to the task that will be downloaded to the executing agent.
UPDATE: I made a public extension out of this.
UPDATE: this works in Azure DevOps 2019.
In TFS 2018u1, the following works:
Import-Module "Microsoft.TeamFoundation.DistributedTask.Task.Common"
Import-Module "Microsoft.TeamFoundation.DistributedTask.Task.Internal"
Add-Type -Assembly "Microsoft.TeamFoundation.DistributedTask.WebApi"
$VSS = Get-VssConnection -TaskContext $distributedTaskContext
$AgentCli = $VSS.GetClient([Microsoft.TeamFoundation.DistributedTask.WebApi.TaskAgentHttpClient])
$AgentConfig = Get-Content "$Env:AGENT_HOMEDIRECTORY\.agent" -Raw | ConvertFrom-Json
$Agent = $AgentCli.GetAgentAsync($AgentConfig.PoolId, $Env:AGENT_ID, $TRUE, $FALSE, $NULL, $NULL, [System.Threading.CancellationToken]::None).GetAwaiter().GetResult()
if($Agent.UserCapabilities.MyCapability)
{
    Write-Host "Got the capability!";
}
The long string of default arguments ending with CancellationToken::None is for compatibility with PowerShell 4. PS4 doesn't support default values for value-typed method parameters; PS5 does.
This snippet does something very questionable - it relies on the location and the structure of the agent configuration file. This is fragile. The problem is that the GetAgentAsync method requires both pool ID and the agent ID, and the former is not exposed in the environment variables. A slightly less hackish approach would check all pools and find the right one by the agent ID:
$Pools = $AgentCli.GetAgentPoolsAsync($NULL, $NULL, $NULL, $NULL, $NULL, [System.Threading.CancellationToken]::None).GetAwaiter().GetResult()
$Demands = New-Object 'System.Collections.Generic.List[string]'
foreach($Pool in $Pools)
{
    $Agent = $AgentCli.GetAgentsAsync($Pool.ID, $Env:AGENT_NAME, $TRUE, $FALSE, $NULL, $Demands, $NULL, [System.Threading.CancellationToken]::None).Result
    if($Agent -and $Agent.Id -eq $Env:AGENT_ID)
    {
        Break
    }
}
This relies on another undocumented implementation detail, specifically that agent IDs are globally unique. This seems to hold as late as TFS 2018, but who knows.
When you employ the $distributedTaskContext, the task connects back to TFS with an artificial user identity, "Project Collection Build Service" (not the agent service account). There's one such user in each collection, and they're distinct. In order to allow tasks running in releases in a collection to query the agent for user capabilities, you need to grant the Reader role on the relevant pool(s) (or all pools) to the user account called "Project Collection Build Service (TheCollectionName)" from that collection.
It also looks like some actions grant an implicit Reader role on a pool to the task identity.
Alternatively, you can construct a VssConnection from scratch with Windows credentials, and grant the agent account(s) Reader role on the pool(s).

“Inject environment variables” Jenkins 2.0

I recently upgraded to Jenkins 2.0.
I'm trying to add a build step to a Jenkins job of "Inject environment variables" along the lines of this SO post, but it's not showing up as an option.
Is this not a feature in Jenkins 2.0 (or has it always been a separate plugin)? Do I have to install another plugin, such as EnvInject?
If you are using Jenkins 2.0, you can load a property file (which consists of all required environment variables along with their corresponding values), read all the environment variables listed there automatically, and inject them into the Jenkins-provided env entity.
Here is a method which performs the above-stated action:
def loadProperties(path) {
    properties = new Properties()
    File propertiesFile = new File(path)
    properties.load(propertiesFile.newDataInputStream())
    Set<Object> keys = properties.keySet();
    for (Object k : keys) {
        String key = (String) k;
        String value = (String) properties.getProperty(key)
        env."${key}" = "${value}"
    }
}
To call this method, we need to pass the path of the property file as a string variable.
For example, in our Jenkinsfile we can call it from a Groovy script like this:
path = "${workspace}/pic_env_vars.properties"
loadProperties(path)
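For example, a pic_env_vars.properties file with contents like the following (hypothetical keys) would make env.APP_ENV and env.API_URL available to later pipeline steps:
# pic_env_vars.properties (hypothetical example contents)
APP_ENV=staging
API_URL=https://api.example.com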
Please ask me if you have any doubts.

Grails clustering quartz jobs sample code and config desired

I am using the quartz plugin with Grails 1.3.7. I have a need to load balance/cluster a server app that uses quartz jobs. Apparently this is supported but I am finding that all the google search results and links within documents are broken. I've found some raw Java examples but I would assume Grails has a more grailsy way to do this. All I need is a simple example to use as a template. I understand I need to somehow enable quartz to use JDBC to store the jobs and manage locking.
I think a link to a single sample would do it. But literally every time I've found something that looks promising, it points to a broken link on Terracotta's site. Pretty much every site eventually leads me here: http://www.opensymphony.com/quartz/wikidocs/TutorialLesson9.html, but when I look on Terracotta's site I see Java stuff but no Grails. If Java is the only way to do this then so be it, but I feel like there has to be some Grails expertise on this out there somewhere!
TIA.
To cluster the Quartz plugin in Grails, there are some files you need to include in your project. First, install the grails-app/conf/QuartzConfig.groovy and make sure jdbcStore is enabled.
quartz {
    autoStartup = true
    jdbcStore = true
    waitForJobsToCompleteOnShutdown = true
}
Next, install the Hibernate configuration files relevant to the database to which you will be connecting. For example, with Oracle, the base Hibernate xml config at grails-app/conf/hibernate/hibernate.cfg.xml is:
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
    '-//Hibernate/Hibernate Configuration DTD 3.0//EN'
    'http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd'>
<hibernate-configuration>
    <session-factory>
        <mapping resource="Quartz.oracle.hbm.xml"/>
    </session-factory>
</hibernate-configuration>
The actual Quartz-Hibernate SQL file for this example will be named Quartz.oracle.hbm.xml and will reside in the same directory. These files should be available at the Quartz plugin on GitHub (https://github.com/nebolsin/grails-quartz), under src/templates/sql. Note, that these scripts only seem to work for DataSource create and create-drop, so you'll need to manually create the Quartz tables on an update, if they don't already exist from a previous run.
Create a grails-app/conf/quartz/quartz.properties file, and edit it to fit your business needs:
# Have the scheduler id automatically generated for
# all schedulers in a cluster
org.quartz.scheduler.instanceId = AUTO
# Don't let Quartz "phone home" to see if new versions
# are available
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
# Configure the Quartz thread pool (the original example only
# needed one job to run once per day)
org.quartz.threadPool.threadCount = 4
# Give the threads a Thread.MIN_PRIORITY level
org.quartz.threadPool.threadPriority = 1
# Allow a minute (60,000 ms) of non-firing to pass before
# a trigger is called a misfire
org.quartz.jobStore.misfireThreshold = 60000
# Handle only 2 misfired triggers at a time
org.quartz.jobStore.maxMisfiresToHandleAtATime = 2
# Check in with the cluster every 5000 ms
org.quartz.jobStore.clusterCheckinInterval = 5000
# Use the Oracle Quartz driver to communicate via JDBC
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
# Have Quartz handle its own transactions with the database
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
# Define the prefix for the Quartz tables in the database
org.quartz.jobStore.tablePrefix = QRTZ_
# Tell Quartz it is clustered
org.quartz.jobStore.isClustered = true
# Tell Quartz that properties passed to the job call are
# NOT all String objects
org.quartz.jobStore.useProperties = false
# Detect JVM shutdown and call shutdown on the scheduler
org.quartz.plugin.shutdownhook.class = org.quartz.plugins.management.ShutdownHookPlugin
org.quartz.plugin.shutdownhook.cleanShutdown = true
# Log the history of triggers and jobs
org.quartz.plugin.triggerHistory.class = org.quartz.plugins.history.LoggingTriggerHistoryPlugin
org.quartz.plugin.jobHistory.class = org.quartz.plugins.history.LoggingJobHistoryPlugin
Note from the above properties that you can add org.quartz.plugins to the Log4j setup in Config.groovy to log relevant job and trigger firing information. I think info level should suffice.
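For example, in the log4j closure of Config.groovy (a sketch):
log4j = {
    // Log trigger/job history from the Quartz plugins configured above
    info 'org.quartz.plugins'
}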
Edit, or create, scripts/_Events.groovy and add the following war-modification closure. This fixes a known Quartz plugin bug by installing the correct quartz.properties, instead of a blank one from the plugin, into the final war file.
eventCreateWarStart = { warName, stagingDir ->
    // Make sure we have the correct quartz.properties in the
    // correct place in the war to enable clustering
    ant.delete(dir: "${stagingDir}/WEB-INF/classes/quartz")
    ant.copy(file: "${basedir}/grails-app/conf/quartz/quartz.properties",
             todir: "${stagingDir}/WEB-INF/classes")
}
And you should be done...
P.S. If you are using an Oracle database, add the following to BuildConfig.groovy in the dependencies block, so that you have access to the Quartz-Oracle communication drivers:
runtime("org.quartz-scheduler:quartz-oracle:1.7.2") {
// Exclude quartz as 1.7.3 is included from the plugin
excludes('quartz')
}
P.P.S. The SQL files at the link above are just plain SQL. To turn them into a Hibernate mapping file, surround each individual SQL command with a Hibernate database-object node, like so (again with an Oracle example):
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE hibernate-mapping PUBLIC
    '-//Hibernate/Hibernate Mapping DTD 3.0//EN'
    'http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd'>
<hibernate-mapping>
    <database-object>
        <create>
            CREATE TABLE QRTZ_JOB_DETAILS (
                JOB_NAME VARCHAR2(200) NOT NULL,
                JOB_GROUP VARCHAR2(200) NOT NULL,
                DESCRIPTION VARCHAR2(250) NULL,
                JOB_CLASS_NAME VARCHAR2(250) NOT NULL,
                IS_DURABLE VARCHAR2(1) NOT NULL,
                IS_VOLATILE VARCHAR2(1) NOT NULL,
                IS_STATEFUL VARCHAR2(1) NOT NULL,
                REQUESTS_RECOVERY VARCHAR2(1) NOT NULL,
                JOB_DATA BLOB NULL,
                PRIMARY KEY (JOB_NAME, JOB_GROUP)
            )
        </create>
        <drop>DROP TABLE QRTZ_JOB_DETAILS</drop>
        <dialect-scope name='org.hibernate.SomeOracleDialect'/>
    </database-object>
    ...
    <database-object>
        <create>INSERT INTO QRTZ_LOCKS VALUES('TRIGGER_ACCESS')</create>
        <drop></drop>
        <dialect-scope name='org.hibernate.SomeOracleDialect'/>
    </database-object>
    ...
</hibernate-mapping>
The dialect-scope tells Hibernate which database dialects the create and drop nodes should be used with. You can try leaving it out and see if it works; otherwise you may have to add the MySQL dialect used by your Grails DataSource.
