Grails clustering quartz jobs sample code and config desired - grails

I am using the Quartz plugin with Grails 1.3.7. I need to load balance/cluster a server app that uses Quartz jobs. Apparently this is supported, but I am finding that all the Google search results and links within documents are broken. I've found some raw Java examples, but I would assume Grails has a more Grailsy way to do this. All I need is a simple example to use as a template. I understand I need to somehow enable Quartz to use JDBC to store the jobs and manage locking.
I think a link to a single sample would do it, but literally every time I've found something that looks promising, it points to a broken link on Terracotta's site. Pretty much every site eventually leads me here: http://www.opensymphony.com/quartz/wikidocs/TutorialLesson9.html, but when I look on Terracotta's site I see Java material but no Grails. If Java is the only way to do this then so be it, but I feel like there has to be some Grails expertise on this out there somewhere!
TIA.

To cluster the Quartz plugin in Grails, there are some files you need to include in your project. First, create grails-app/conf/QuartzConfig.groovy and make sure jdbcStore is enabled:
quartz {
    autoStartup = true
    jdbcStore = true
    waitForJobsToCompleteOnShutdown = true
}
Next, add the Hibernate configuration files relevant to the database you will be connecting to. For example, with Oracle, the base Hibernate XML config at grails-app/conf/hibernate/hibernate.cfg.xml is:
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
    '-//Hibernate/Hibernate Configuration DTD 3.0//EN'
    'http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd'>
<hibernate-configuration>
    <session-factory>
        <mapping resource="Quartz.oracle.hbm.xml"/>
    </session-factory>
</hibernate-configuration>
The actual Quartz-Hibernate SQL file for this example is named Quartz.oracle.hbm.xml and resides in the same directory. These files are available from the Quartz plugin on GitHub (https://github.com/nebolsin/grails-quartz), under src/templates/sql. Note that these scripts only seem to work when the DataSource is set to create or create-drop, so on an update you'll need to create the Quartz tables manually if they don't already exist from a previous run.
Create a grails-app/conf/quartz/quartz.properties file and edit it to fit your business needs:
# Have the scheduler id automatically generated for
# all schedulers in a cluster
org.quartz.scheduler.instanceId = AUTO
# Don't let Quartz "phone home" to see if new versions are available
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
# Size the worker pool for the number of jobs expected to run concurrently
org.quartz.threadPool.threadCount = 4
# Give the threads a Thread.MIN_PRIORITY level
org.quartz.threadPool.threadPriority = 1
# Allow a minute (60,000 ms) of non-firing to pass before
# a trigger is called a misfire
org.quartz.jobStore.misfireThreshold = 60000
# Handle only 2 misfired triggers at a time
org.quartz.jobStore.maxMisfiresToHandleAtATime = 2
# Check in with the cluster every 5000 ms
org.quartz.jobStore.clusterCheckinInterval = 5000
# Use the Oracle Quartz delegate to communicate via JDBC
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
# Have Quartz handle its own transactions with the database
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
# Define the prefix for the Quartz tables in the database
org.quartz.jobStore.tablePrefix = QRTZ_
# Tell Quartz it is clustered
org.quartz.jobStore.isClustered = true
# Tell Quartz that properties passed to the job call are
# NOT all String objects
org.quartz.jobStore.useProperties = false
# Detect JVM shutdown and call shutdown on the scheduler
org.quartz.plugin.shutdownhook.class = org.quartz.plugins.management.ShutdownHookPlugin
org.quartz.plugin.shutdownhook.cleanShutdown = true
# Log the history of triggers and jobs
org.quartz.plugin.triggerHistory.class = org.quartz.plugins.history.LoggingTriggerHistoryPlugin
org.quartz.plugin.jobHistory.class = org.quartz.plugins.history.LoggingJobHistoryPlugin
Note from the above properties that you can set org.quartz.plugins in the Log4j setup of Config.groovy to log relevant job and trigger firing information; the info level should suffice.
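For example, a minimal Log4j entry in Config.groovy could look like the following (the package name comes from the history plugin classes configured above):
log4j = {
    // Log trigger firings and job executions from the Quartz history plugins
    info 'org.quartz.plugins'
}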
Edit, or create, scripts/_Events.groovy and add the following war-modification closure. This fixes a known Quartz plugin bug by installing the correct quartz.properties, instead of a blank one from the plugin, into the final war file.
eventCreateWarStart = { warName, stagingDir ->
    // Make sure we have the correct quartz.properties in the
    // correct place in the war to enable clustering
    ant.delete(dir: "${stagingDir}/WEB-INF/classes/quartz")
    ant.copy(file: "${basedir}/grails-app/conf/quartz/quartz.properties",
             todir: "${stagingDir}/WEB-INF/classes")
}
And you should be done...
P.S. If you are using an Oracle database, add the following to the dependencies block of BuildConfig.groovy, so that you have access to the Quartz-Oracle communication drivers:
runtime("org.quartz-scheduler:quartz-oracle:1.7.2") {
// Exclude quartz as 1.7.3 is included from the plugin
excludes('quartz')
}
P.P.S. The SQL files at the link above are just the SQL. To turn one into a Hibernate mapping file, surround each individual SQL command with a Hibernate database-object node, like so (again with the Oracle example):
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE hibernate-mapping PUBLIC
    '-//Hibernate/Hibernate Mapping DTD 3.0//EN'
    'http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd'>
<hibernate-mapping>
    <database-object>
        <create>
            CREATE TABLE QRTZ_JOB_DETAILS (
                JOB_NAME VARCHAR2(200) NOT NULL,
                JOB_GROUP VARCHAR2(200) NOT NULL,
                DESCRIPTION VARCHAR2(250) NULL,
                JOB_CLASS_NAME VARCHAR2(250) NOT NULL,
                IS_DURABLE VARCHAR2(1) NOT NULL,
                IS_VOLATILE VARCHAR2(1) NOT NULL,
                IS_STATEFUL VARCHAR2(1) NOT NULL,
                REQUESTS_RECOVERY VARCHAR2(1) NOT NULL,
                JOB_DATA BLOB NULL,
                PRIMARY KEY (JOB_NAME,JOB_GROUP)
            )
        </create>
        <drop>DROP TABLE QRTZ_JOB_DETAILS</drop>
        <dialect-scope name='org.hibernate.SomeOracleDialect' />
    </database-object>
    ...
    <database-object>
        <create>INSERT INTO QRTZ_LOCKS VALUES('TRIGGER_ACCESS')</create>
        <drop></drop>
        <dialect-scope name='org.hibernate.SomeOracleDialect' />
    </database-object>
    ...
</hibernate-mapping>
The dialect-scope tells Hibernate which database dialects the create and drop nodes should be used with. You can try leaving it out and see if it works; otherwise you may have to replace the placeholder with the dialect class actually used by your Grails DataSource (for example, the Oracle or MySQL dialect).

Related

How to override logging in dataflow with my logback.xml file?

We are trying to use the logback.xml that we use in GCP Cloud Run, which has amazing filtering features. Our logback.xml contains this for Cloud Run:
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
        <layout class="com.orderlyhealth.api.logging.logback.GCPCloudLoggingJSONLayout">
            <pattern>${CONSOLE_PATTERN}</pattern>
        </layout>
    </encoder>
</appender>
Our GCPCloudLoggingJSONLayout does a great job of setting all the things we need, like clientId, customerRequestId, and so on, and we can filter across many microservices on one customer or one customer request. We lose this in Dataflow currently, though. We tried adding logback.xml to src/main/resources, and deploying the project seems to use it in the shell, like so:
{"message":"[main][-][:] o.a.b.r.d.DataflowRunner Template successfully created.\n",
"logger":"org.apache.beam.runners.dataflow.DataflowRunner",
"transactionId":null,"socket":null,"clntSocket":null,
"version":null,
"timestamp":{"seconds":1619694798,"nanos":4000000},
"thread":"main",
"severity":"INFO",
"instanceId":null,
"headers":{},
"messageInfo":{"message":"Message short enough. Displayed top level"}
}
Currently we see plain, less-structured entries instead, which are not nearly as useful for tracing a customer request through our systems.
Thanks for any ideas on modifying Dataflow logging.
I don't think you can change how Dataflow logs to Cloud Logging.
Instead, you can change how/what you log and let Dataflow pass it through to Cloud Logging. See Logging pipeline messages.
Or you can use cloud logging client libraries in your pipeline directly: https://cloud.google.com/logging/docs/reference/libraries.
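Following the first suggestion, a standard Beam pattern is to log from a DoFn through SLF4J, embedding the details you need in the message itself; a minimal sketch, where the class name, key, and values are purely illustrative:
import org.apache.beam.sdk.transforms.DoFn;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TracedFn extends DoFn<String, String> {
    private static final Logger LOG = LoggerFactory.getLogger(TracedFn.class);

    @ProcessElement
    public void processElement(ProcessContext c) {
        // Details added to the message itself survive the trip to Cloud Logging
        LOG.info("customerRequestId={} payload={}", "req-123", c.element());
        c.output(c.element());
    }
}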
Please take a look at How to override Google DataFlow logging with logback? for the latest version of this answer
I copied the current answer there to make it easier for folks who want to look:
Dataflow relies on java.util.logging (aka JUL) as the logging backend for SLF4J and adds various bridges to ensure that logs from other libraries are output as well. With this kind of setup, we are limited to adding any additional details to the log message itself.
This also applies to any runner executing a portable job, since the container with the SDK harness has a similar logging configuration, for example Dataflow Runner V2.
To do this, we want to create a custom formatter and apply it to the root JUL logger. For example:
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

public class CustomFormatter extends SimpleFormatter {
    @Override
    public String formatMessage(LogRecord record) {
        // Implement whatever logic is needed to add details
        // to the message portion of the log statement
        return super.formatMessage(record);
    }
}
Then, during start-up of the worker, we need to update the root logger to use this formatter. We can achieve this with a JvmInitializer, implementing the beforeProcessing method like so:
import com.google.auto.service.AutoService;
import java.util.logging.Handler;
import java.util.logging.LogManager;
import java.util.logging.Logger;
import org.apache.beam.sdk.harness.JvmInitializer;
import org.apache.beam.sdk.options.PipelineOptions;

@AutoService(JvmInitializer.class)
public class LoggerInitializer implements JvmInitializer {
    @Override
    public void beforeProcessing(PipelineOptions options) {
        // Swap the formatter on every handler attached to the root JUL logger
        LogManager logManager = LogManager.getLogManager();
        Logger rootLogger = logManager.getLogger("");
        for (Handler handler : rootLogger.getHandlers()) {
            handler.setFormatter(new CustomFormatter());
        }
    }
}

Select a different runner for cucumber.api.cli.Main?

Is it possible to define/specify a runner when starting tests from Cucumber's command line (cucumber.api.cli.Main)?
My reason for this is so I can generate XML reports in Jenkins and push the results to ALM Octane.
I kind of inherited this project, and it's using Gradle to do a javaexec and call cucumber.api.cli.Main.
I know it's possible to do this with @RunWith(OctaneCucumber.class) when using the JUnit runner + Maven (or only the JUnit runner); otherwise that tag is ignored. I have the custom runner with that tag, but when I run from cucumber.api.cli.Main I can't find a way to run with it, and my tag just gets ignored.
What @Grasshopper suggested didn't exactly work, but it made me look in the right direction.
Instead of adding the code as a plugin, I managed to "hack/load" the Octane reporter by creating a copy of cucumber.api.cli.Main, using it as a base to run the CLI commands, changing the run method a bit, and adding the plugin at runtime. I needed to do this because the plugin required quite a few parameters in its constructor. It might not be the perfect solution, but it allowed me to keep the Gradle build process I initially had.
public static byte run(String[] argv, ClassLoader classLoader) throws IOException {
    RuntimeOptions runtimeOptions = new RuntimeOptions(new ArrayList<String>(asList(argv)));
    ResourceLoader resourceLoader = new MultiLoader(classLoader);
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    Runtime runtime = new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
    //==================== Added the following lines ====================
    // Hardcoded runner(?) class. If it's changed, it will need to be changed here also
    OutputFile outputFile = new OutputFile(Main.class);
    runtimeOptions.addPlugin(new HPEAlmOctaneGherkinFormatter(resourceLoader, runtimeOptions.getFeaturePaths(), outputFile));
    //===================================================================
    runtime.run();
    return runtime.exitStatus();
}
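For completeness, a rough sketch of wiring the copied class into the existing Gradle javaexec call; the task name, main class, glue package, and feature path below are assumptions, not from the original build:
task runCucumberWithOctane(type: JavaExec) {
    // Hypothetical copy of cucumber.api.cli.Main that registers the Octane formatter
    main = 'com.example.cucumber.OctaneCliMain'
    classpath = sourceSets.test.runtimeClasspath
    args '--glue', 'com.example.steps', 'src/test/resources/features'
}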

grails 3 - Update grails config during Plugin "doWithApplicationContext"

I've got a plugin that, during startup, reads some properties from the application's config file, creates some domain objects, and then needs to update the configuration with some additional information. However, it seems that the config object available during doWithApplicationContext is not the actual grailsApplication.config object.
For instance, attempting to do something straightforward in the MyPluginGrailsPlugin.groovy file like:
void doWithApplicationContext() {
    grailsApplication.config.put('test', 'testValue')
}
does not update the config.
If this plugin is included in an application, then at any point after startup grailsApplication.config.getProperty('test') will return null.
How does one go about updating the config map during plugin startup?
NOTE: In Grails 2, this used to work.
With this code snippet in MyPluginGrailsPlugin.groovy's doWithApplicationContext, new properties were successfully added into the application's config object.
// Parse the plugin's extra settings into a ConfigObject
ConfigObject myConfigObject = new ConfigSlurper().parse(props)
// Wrap them in a PropertySource and register it ahead of the existing sources
PropertySource propertySource = new MapPropertySource('grails.plugins.myPlugin', [:] << myConfigObject)
def propertySources = grailsApplication.mainContext.environment.propertySources
propertySources.addFirst propertySource
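A quick way to confirm the registration worked is to read a value back through the normal config API; the key and value below are a hypothetical entry from props, not from the original post:
// Values from the newly added property source should now be visible
assert grailsApplication.config.getProperty('someKey') == 'someValue'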
As an additional note: in doWithApplicationContext in my plugin, changing the config object like this worked in Grails 2 and no longer works in Grails 3.
grailsApplication.config.merge(myConfigObject)
grailsApplication.configChanged()

Why doesn't the default attribute for number fields work for Jenkins jelly configurations?

I'm working on a Jenkins plugin where we make a call out to a remote service using Spring's RestTemplate. To configure the timeout values, I'm setting up some fields in the global configuration using the plugin's global.jelly file, with a number field as shown here:
<f:entry title="Read Timeout" field="readTimeout" description="Read timeout in ms.">
    <f:number default="3000"/>
</f:entry>
Now, this works to save and retrieve the values no problem, so it looks like everything is set up correctly for my BuildStepDescriptor. However, when I first install the update to a Jenkins instance, instead of getting 3000 in the field by default as I would expect, I am getting 0. This is the same for all the fields I'm using.
Given that the Jelly tag reference library says this attribute should be the default value, why do I keep seeing 0 when I first install the plugin?
Is there some more Java code that needs to be added to my plugin to tie the default in Jelly back to the global configuration?
I would think that when Jenkins starts, it goes to get the plugin configuration XML, fails to find a value, and sets it to a default of 0.
I have got round this in the past by setting a default in the descriptor (in Groovy). That value will then be saved into the global config the first time in, and will also be available if the user never visits the config page.
@Extension
static class DescriptorImpl extends AxisDescriptor {
    final String displayName = 'Selenium Capability Axis'
    String server = 'http://localhost:4444'
    Boolean sauceLabs = false
    String sauceLabsName
    Secret sauceLabsPwd
    String sauceLabsAPIURL = 'http://saucelabs.com/rest/v1/info/platforms/webdriver'
    String sauceLabsURL = 'http://ondemand.saucelabs.com:80'
}
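Translating that idea to the original readTimeout example, a minimal sketch might look like the following; the class shape and names are assumed from the question, not taken from a real plugin:
@Extension
static class DescriptorImpl extends BuildStepDescriptor<Builder> {
    // Acts as the effective default for the f:number field: it is shown
    // before any save and persisted the first time the config is submitted
    int readTimeout = 3000

    DescriptorImpl() {
        load() // replace defaults with any previously saved global config
    }

    @Override
    boolean isApplicable(Class<? extends AbstractProject> jobType) {
        true
    }
}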

WebSphere configuration with ant & jython

I have been able to successfully configure resource environment entries with the Jython script below. I call the Jython script with the wsadmin program in my local app server's bin directory.
I work on a team where ant is the preferred technology in our build process.
I've looked around the web for documentation on configuring WebSphere with Ant, and so far it looks to me like one is mainly able to call programs such as wsadmin from Ant.
Is it possible to configure resource environment entries using ant directly instead of using a Jython or Jacl script? If not, how can I go about setting up an ant task to reduce the amount of Jython that is needed to set up resource environment entries?
Here's my current Jython script, which sets up resource environment entries. Ultimately I am looking for ways to reduce our dependence on Jython...
# Set up Variables used within this script
objServerAttrs = AdminControl.completeObjectName('WebSphere:type=Server,*')
node = AdminControl.getAttribute(objServerAttrs, 'nodeName')
server = AdminControl.getAttribute(objServerAttrs, 'name')
provider = "Test_ConfigurationProvider"
providerFactory = "com.DG_ConfigurationFactory"
providerClass = "com.DG_Configuration"
# Function for creating resource custom properties
def createResourceCustomProperty(envEntry, propName, propValue):
    propSet = AdminConfig.showAttribute(envEntry, 'propertySet')
    if propSet == None:
        propSet = AdminConfig.create('J2EEResourcePropertySet', envEntry, [])
    name = ['name', propName]
    value = ['value', propValue]
    propAttrs = [name, value]
    AdminConfig.create('J2EEResourceProperty', propSet, propAttrs)
    return
# Create the resource environment provider
AdminResources.createResourceEnvProvider(node, server, provider)
AdminResources.createResourceEnvProviderRef(node,server,provider, providerFactory, providerClass)
# Create the resource environment entries
## Context Configuration
envEntry = AdminResources.createResourceEnvEntries(node,server,provider, "Context Configuration", "test-config/context")
createResourceCustomProperty(envEntry, "deployment.environment", "IDE")
createResourceCustomProperty(envEntry, "server.context", "com.context.DG_WebSphereServerContext")
createResourceCustomProperty(envEntry, "user.context", "com.context.DG_WebSphereUserContext")
createResourceCustomProperty(envEntry, "log.directory", "C:/Development/WebSphere/Logs")
createResourceCustomProperty (envEntry, "file.directory", "C:/Development/WebSphere/AppFiles")
## Mail Configuration
envEntry = AdminResources.createResourceEnvEntries(node,server,provider, "Mail Configuration", "test-config/mail")
createResourceCustomProperty(envEntry, "enabled", "false")
createResourceCustomProperty(envEntry, "mailSessionJndiName", "mail/MailSession")
## User Repository Configuration
envEntry = AdminResources.createResourceEnvEntries(node, server, provider, "User Repository Configuration", "test-config/userRepository")
createResourceCustomProperty(envEntry, "ldap.provider.url", "ldap://test.com:389/cn=users,dc=com")
createResourceCustomProperty (envEntry, "ldap.security.principal", "cn=was_user,cn=users,dc=com")
# Save changes to the configuration
AdminConfig.save()
Starting with WAS 7, in addition to the admin console and wsadmin, a third way to configure the server was introduced, namely properties-file-based configuration. This new administrative model supposedly "eliminates the need to write complex wsadmin scripts", as explained in the related Education Assistant presentation.
What you do, basically, is configure a single environment, export the parts of configuration that are of interest to a portable properties file, and later use this file as an input to a single line of wsadmin script, which applies the configuration in the properties file to another target server. So you get rid of many lines of Jython and work with a much simpler artefact, which is a property file with a simple and familiar syntax.
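As a rough sketch, the round trip is two wsadmin (Jython) calls; the file name below is illustrative, and the exact option strings for your WAS version can be checked with AdminTask.help:
# On the source server: extract the configuration of interest to a portable file
AdminTask.extractConfigProperties('[-propertiesFileName myConfig.props]')
# On the target server: apply the (possibly edited) properties file and save
AdminTask.applyConfigProperties('[-propertiesFileName myConfig.props]')
AdminConfig.save()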
In addition to the above links, there is a nice article about this feature on developerWorks.
