I want to use the dynamic scheduling feature of the Grails Quartz plugin.
I am running Grails 2.3.5 with the Quartz plugin (quartz:1.0.2).
I am able to persist the Quartz information to my MySQL database and I am able to run normal Quartz jobs.
The problem is scheduling tasks dynamically; I cannot get that to work.
Here is my setup and what I am trying to do:
I have a simple Job in "grails-app/tao/marketing/MarketingJob" which looks like this:
package tao.marketing

import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

class MarketingJob {

    static triggers = {}

    def execute(JobExecutionContext context) {
        try {
            def today = new Date()
            println today
        }
        catch (Throwable e) {
            throw new JobExecutionException(e.getMessage(), e);
        }
    }
}
I now try to schedule this job dynamically from a service:
package tao

import grails.transaction.Transactional
import tao.marketing.CampaignSchedule
import tao.Person
import jobs.tao.marketing.*

class ScheduleService {

    def scheduleMarketingForPerson(CampaignSchedule campaignSchedule, Person person) {
        log.info("Schedule new Marketing for: " + person.last_name)
        campaignSchedule.scheduleActions.each {
            Date today = new Date();
            Date scheduleDate = today + it.afterXdays
            log.info("ScheduleAction: " + it.id + ": " + scheduleDate)
            MarketingJob.schedule(scheduleDate, ["scheduleActions.id": it.id, "person.apiKey": person.apiKey])
        }
    }
}
In my IDE (STS) MarketingJob cannot be found.
MarketingJob.schedule(scheduleDate, ["scheduleActions.id":it.id, "person.apiKey":person.apiKey])
How do I correctly import the MarketingJob?
Do I understand the dynamic scheduling feature correctly?
Could it be that your job is in package tao.marketing while your import is import jobs.tao.marketing.*? I mean, the import starts with jobs.
The problem I had was that in my STS IDE I didn't have the jobs directory marked as a code directory. Thanks for all your comments.
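In other words, the job is imported by the package it declares, not by its location under the jobs source folder. A minimal sketch of the corrected imports in ScheduleService (everything else stays as in the question):

package tao

import grails.transaction.Transactional
import tao.marketing.CampaignSchedule
import tao.marketing.MarketingJob   // was: import jobs.tao.marketing.*
import tao.Person

With that import in place, the MarketingJob.schedule(scheduleDate, ...) call in the service compiles and schedules the job for the given date.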
I am learning to write a Groovy script to configure the Matrix Authorization plugin. I have written this script, in which only authenticated users can access Jenkins:
import jenkins.model.*
import hudson.security.*
import com.cloudbees.hudson.plugins.folder.properties.AuthorizationMatrixProperty
try {
    def instance = Jenkins.getInstance()

    def realm = new HudsonPrivateSecurityRealm(false)
    instance.setSecurityRealm(realm)

    def strategy = new hudson.security.GlobalMatrixAuthorizationStrategy()
    strategy.add(Jenkins.ADMINISTER, 'authenticated')
    instance.setAuthorizationStrategy(strategy)

    instance.save()
}
catch (Throwable exc) {
    println '!!! Error configuring jenkins'
    org.codehaus.groovy.runtime.StackTraceUtils.sanitize(new Exception(exc)).printStackTrace()
    println '!!! Shutting down Jenkins to prevent possible mis-configuration from going live'
    jenkins.cleanUp()
    System.exit(1)
}
Now, I want to configure this matrix plugin so that nobody can access the Jenkins settings area (not even authenticated users). I have done a lot of research on this but am not able to move forward. Any help/pointer will be appreciated. Thanks!
I found the answer to this. Below is the complete code for the above requirement; what I was missing was Jenkins.READ.
import jenkins.model.*
import hudson.security.*
import com.cloudbees.hudson.plugins.folder.properties.AuthorizationMatrixProperty
try {
    def instance = Jenkins.getInstance()

    def realm = new HudsonPrivateSecurityRealm(false)
    instance.setSecurityRealm(realm)

    def strategy = new hudson.security.GlobalMatrixAuthorizationStrategy()
    strategy.add(Jenkins.READ, 'authenticated')
    instance.setAuthorizationStrategy(strategy)

    instance.save()
}
catch (Throwable exc) {
    println '!!! Error configuring jenkins'
    org.codehaus.groovy.runtime.StackTraceUtils.sanitize(new Exception(exc)).printStackTrace()
    println '!!! Shutting down Jenkins to prevent possible mis-configuration from going live'
    jenkins.cleanUp()
    System.exit(1)
}
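If you later need one account to keep access to the settings area while everyone else only gets read access, a small hedged sketch (the 'admin' username here is an assumption, not something from the question) would add an extra grant before setting the strategy:

// Assumption: an 'admin' account already exists in the HudsonPrivateSecurityRealm.
strategy.add(Jenkins.ADMINISTER, 'admin')
strategy.add(Jenkins.READ, 'authenticated')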
According to the Beam website,
Often it is faster and simpler to perform local unit testing on your
pipeline code than to debug a pipeline’s remote execution.
For this reason, I want to use test-driven development for my Beam/Dataflow app that writes to Bigtable.
However, following the Beam testing documentation I hit an impasse: PAssert isn't useful because the output PCollection contains org.apache.hadoop.hbase.client.Put objects, which don't override the equals method.
I can't get the contents of the PCollection to validate them either, since
It is not possible to get the contents of a PCollection directly - an
Apache Beam or Dataflow pipeline is more like a query plan of what
processing should be done, with PCollection being a logical
intermediate node in the plan, rather than containing the data.
So how can I test this pipeline, other than manually running it? I'm using Maven and JUnit (in Java since that's all the Dataflow Bigtable Connector seems to support).
The Bigtable Emulator Maven plugin can be used to write integration tests for this:
1. Configure the Maven Failsafe plugin and change your test case's suffix from *Test to *IT so it runs as an integration test.
2. Install the Bigtable emulator in the gcloud SDK from the command line:
   gcloud components install bigtable
   Note that this required step will reduce code portability (e.g. will it run on your build system? On other devs' machines?), so I'm going to containerize it using Docker before deploying to the build system.
3. Add the emulator plugin to the pom per the README.
4. Use the HBase Client API and see the example Bigtable Emulator integration test to set up your session and table(s).
5. Write your test as normal per the Beam documentation, except that instead of using PAssert, call CloudBigtableIO.writeToTable and then use the HBase client to read the data back from the table to verify it.
Here's an example integration test:
package adair.example;
import static org.apache.hadoop.hbase.util.Bytes.toBytes;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.hamcrest.collection.IsIterableContainingInAnyOrder;
import org.junit.Assert;
import org.junit.Test;
import com.google.cloud.bigtable.beam.CloudBigtableIO;
import com.google.cloud.bigtable.beam.CloudBigtableTableConfiguration;
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
/**
* A simple integration test example for use with the Bigtable Emulator maven plugin.
*/
public class DataflowWriteExampleIT {

  private static final String PROJECT_ID = "fake";
  private static final String INSTANCE_ID = "fakeinstance";
  private static final String TABLE_ID = "example_table";
  private static final String COLUMN_FAMILY = "cf";
  private static final String COLUMN_QUALIFIER = "cq";

  private static final CloudBigtableTableConfiguration TABLE_CONFIG =
      new CloudBigtableTableConfiguration.Builder()
          .withProjectId(PROJECT_ID)
          .withInstanceId(INSTANCE_ID)
          .withTableId(TABLE_ID)
          .build();

  public static final List<String> VALUES_TO_PUT = Arrays
      .asList("hello", "world", "introducing", "Bigtable", "plus", "Dataflow", "IT");

  @Test
  public void testPipelineWrite() throws IOException {
    try (Connection connection = BigtableConfiguration.connect(PROJECT_ID, INSTANCE_ID)) {
      Admin admin = connection.getAdmin();
      createTable(admin);

      List<Mutation> puts = createTestPuts();

      //Use Dataflow to write the data--this is where you'd call the pipeline you want to test.
      Pipeline p = Pipeline.create();
      p.apply(Create.of(puts)).apply(CloudBigtableIO.writeToTable(TABLE_CONFIG));
      p.run().waitUntilFinish();

      //Read the data from the table using the regular hbase api for validation
      ResultScanner scanner = getTableScanner(connection);
      List<String> resultValues = new ArrayList<>();
      for (Result row : scanner) {
        String cellValue = getRowValue(row);
        System.out.println("Found value in table: " + cellValue);
        resultValues.add(cellValue);
      }

      Assert.assertThat(resultValues,
          IsIterableContainingInAnyOrder.containsInAnyOrder(VALUES_TO_PUT.toArray()));
    }
  }

  private void createTable(Admin admin) throws IOException {
    HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(TABLE_ID));
    tableDesc.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
    admin.createTable(tableDesc);
  }

  private ResultScanner getTableScanner(Connection connection) throws IOException {
    Scan scan = new Scan();
    Table table = connection.getTable(TableName.valueOf(TABLE_ID));
    return table.getScanner(scan);
  }

  private String getRowValue(Result row) {
    return Bytes.toString(row.getValue(toBytes(COLUMN_FAMILY), toBytes(COLUMN_QUALIFIER)));
  }

  private List<Mutation> createTestPuts() {
    return VALUES_TO_PUT
        .stream()
        .map(this::stringToPut)
        .collect(Collectors.toList());
  }

  private Mutation stringToPut(String cellValue) {
    String key = UUID.randomUUID().toString();
    Put put = new Put(toBytes(key));
    put.addColumn(toBytes(COLUMN_FAMILY), toBytes(COLUMN_QUALIFIER), toBytes(cellValue));
    return put;
  }
}
In Google Cloud you can easily do e2e testing of your Dataflow pipeline using real cloud resources like Pub/Sub topics and BigQuery tables.
By using the JUnit 5 extension model (https://junit.org/junit5/docs/current/user-guide/#extensions) you can create custom classes that handle the creation and deletion of the required resources for your pipeline.
You can find a demo/seed project here https://github.com/gabihodoroaga/dataflow-e2e-demo and a blog post here https://hodo.dev/posts/post-31-gcp-dataflow-e2e-tests/.
For an already existing MultiJob in Jenkins, I need to add new phase jobs using Groovy scripting. I have written the following Groovy code, which adds the already existing job p25_deploy-1.
The code runs and updates the MultiJob, but the phase job does not show up as mapped in the Jenkins UI. However, if I look at config.xml, it is created properly as expected, except for a missing <killPhaseOnJobResultCondition> tag. Why is the phase job not mapped properly?
import jenkins.model.*
import hudson.model.*
import com.tikal.jenkins.plugins.multijob.*
import com.tikal.jenkins.plugins.multijob.PhaseJobsConfig.*
import com.tikal.jenkins.plugins.multijob.PhaseJobsConfig.KillPhaseOnJobResultCondition.*
import java.lang.String.*
import hudson.model.Descriptor;
import hudson.tasks.Builder;
def jenkinsInstance = jenkins.model.Jenkins.instance
def templateJobName = 'profile_p25'
def templateJob = jenkinsInstance.getJob(templateJobName)
// get MultiJob BuildPhases and clone each PhaseJob
builders = templateJob.getBuilders();
builders.each { b ->
    if (b instanceof MultiJobBuilder) {
        def pj = b.getPhaseJobs()
        hudson.model.Describable p1 = new PhaseJobsConfig("p25_deploy-1", null,
            true, PhaseJobsConfig.KillPhaseOnJobResultCondition.NEVER, null, false, false, null, 0, false, true, null, false, false)
        pj.add(p1)
    }
}
templateJob.save()
// update dependencies
jenkinsInstance.rebuildDependencyGraph()
Any help will be really appreciated. I have tried many ways but was not able to figure out the problem with the script.
We could use the Job DSL to create the job, but I want this done in Groovy scripting, and moreover to modify the existing job.
Yay! I am back with the answer to my question. I had been trying this for a very long time and was finally able to make it work. I knew the solution would be really simple, but I could not figure out the trick.
import jenkins.model.*
import hudson.model.*
import com.tikal.jenkins.plugins.multijob.*
import com.tikal.jenkins.plugins.multijob.PhaseJobsConfig.*
import com.tikal.jenkins.plugins.multijob.PhaseJobsConfig.KillPhaseOnJobResultCondition.*
import java.lang.String.*
import hudson.model.Descriptor
import hudson.tasks.Builder
def jenkinsInstance = jenkins.model.Jenkins.instance
def templateJobName = 'profile_p25'
def templateJob = jenkinsInstance.getJob(templateJobName)
// get MultiJob BuildPhases and clone each PhaseJob
builders = templateJob.getBuilders();
builders.each { b ->
    if (b instanceof MultiJobBuilder) {
        def pj = b.getPhaseJobs()
        // deploys[i] holds the phase job name to add (e.g. "p25_deploy-1" from the question)
        // the trick: pass null for the kill condition in the constructor, then set it via the property
        hudson.model.Describable newphase = new PhaseJobsConfig(deploys[i], null,
            true, null, null, false, false, null, 0, false, false, "", false, false)
        newphase.killPhaseOnJobResultCondition = 'NEVER'
        pj.add(newphase)
    }
}
templateJob.save()
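As in the script from the question, it can also help to rebuild the dependency graph after saving, so the new phase mapping shows up in the upstream/downstream views:

// update dependencies so Jenkins picks up the new phase job relationship
jenkinsInstance.rebuildDependencyGraph()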
I am using Neo4j in embedded mode. For some operations on the database on the server, I am trying to execute a Groovy script. The Groovy script runs successfully without any error, but it does not create any new records when I check with the neo4j-community tool.
Script
/**
* Created by prabjot on 7/1/17.
*/
@Grab(group="org.neo4j", module="neo4j-kernel", version="2.3.6")
@Grab(group="org.neo4j", module="neo4j-lucene-index", version="2.3.6")
@Grab(group='org.neo4j', module='neo4j-shell', version='2.3.6')
@Grab(group='org.neo4j', module='neo4j-cypher', version='2.3.6')
import org.neo4j.graphdb.factory.GraphDatabaseFactory
import org.neo4j.graphdb.Node
import org.neo4j.graphdb.Result
import org.neo4j.graphdb.Transaction

class Neo4jEmbeddedAccess {

    public static void main(String[] args) {
        def map = [:]
        map.put("allow_store_upgrade", "true")
        map.put("remote_shell_enabled", "true")

        def db = new GraphDatabaseFactory().newEmbeddedDatabaseBuilder("/opt/neo4j-community-3.0.4/data/databases/graph.db")
                .setConfig(map)
                .newGraphDatabase()

        Transaction tx = db.beginTx()
        Node person = db.createNode();
        person.setProperty("name", "prabjot")
        print("id---->" + person.id);

        Result result = db.execute("Match (country:Country) where id(country)=73 SET country.modified=true return country")
        print(result)
        tx.success();

        println """starting embedded graph db
use bin/neo4j-shell from a new distribution to connect
we're keeping the graphdb open for 120 secs"""

        db.shutdown()
    }
}
Please help me with what I am doing wrong here; I have checked my DB location and it is the same one I am using in both the script and the tool.
Thanks
You forgot tx.close(), which commits the transaction.
tx.success() only marks it as successful.
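For reference, a minimal sketch of the transaction handling with the missing close added, using try/finally so the transaction is always closed (and therefore committed or rolled back) even if an exception is thrown:

Transaction tx = db.beginTx()
try {
    Node person = db.createNode()
    person.setProperty("name", "prabjot")
    // mark the transaction as successful so close() commits instead of rolling back
    tx.success()
} finally {
    // commits the work (or rolls back if success() was not called) and releases the transaction
    tx.close()
}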
This is grails-app/conf/Config.groovy, from which I am trying to access variables in a Quartz job. This is the variable I am trying to access in the Quartz job below:
ais.mediquery.TrialVariable="My Variable";
Quartz Job :
import com.projectname.*;
import org.codehaus.groovy.grails.commons.GrailsApplication;
import grails.util.Holders
class TrialJob {

    GrailsApplication grailsApplication

    static triggers = {
        simple repeatInterval: 10000l // execute the job every 10 seconds
    }

    def execute() {
        //log.info(grailsApplication.ais.mediquery.TrialVariable)
        println(Holders.config.ais.mediquery.TrialVariable)
    }
}
I have tried using both GrailsApplication and ais.mediquery.TrialVariable, but neither of them seems to access the variable and print it.
The config is definitely available from GrailsApplication and that's the best option:
println grailsApplication.config.ais.mediquery.TrialVariable
If that doesn't seem to work, try printing the whole config and see if you have a typo:
println grailsApplication.config
or
println grailsApplication.config.flatten()
As with any behavior that seems strange, run grails clean and grails compile to force a clean compile.
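Putting that together, a minimal sketch of the job reading the value through the injected grailsApplication (assuming the ais.mediquery.TrialVariable entry from Config.groovy shown above):

import org.codehaus.groovy.grails.commons.GrailsApplication

class TrialJob {

    // injected by Spring because the property name matches the grailsApplication bean
    GrailsApplication grailsApplication

    static triggers = {
        simple repeatInterval: 10000l
    }

    def execute() {
        // reads the merged configuration at runtime
        println grailsApplication.config.ais.mediquery.TrialVariable
    }
}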