Adding a New Quartz.NET Job

I've inherited an application which uses Quartz.NET. I have no idea how to maintain or use it, and I need to add a new job.
I created a new job class and added it to the jobs XML file as an element under <schedule>.
Will this automatically add the appropriate row to the CRON_TRIGGERS table, or is there some other step?
Or do I need to manually insert a row into the CRON_TRIGGERS table?
Thanks

You can create a new job in code using something like:
var jobDetail = JobBuilder.Create<MyJob>()
    .SetJobData(jobDataMap)
    .Build();
Here MyJob is a class that implements the IJob interface. The JobDataMap can be instantiated from a dictionary containing your data. You can retrieve the data inside the job's Execute method with something like context.JobDetail.JobDataMap["aKeyInYourDictionary"].
Now you'll have to set up a trigger to run the job every x milliseconds:
var trigger = TriggerBuilder.Create()
    .StartNow()
    .WithSimpleSchedule(x => x
        .WithInterval(TimeSpan.FromMilliseconds(timeInMilliseconds))
        .RepeatForever())
    .Build();
Finally, call scheduler.ScheduleJob(jobDetail, trigger) on your IScheduler instance to schedule the job.
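Putting the pieces together, here is a minimal end-to-end sketch. It assumes Quartz.NET 3.x (where the scheduler API is async); HelloJob and the "greeting" key are hypothetical names used only for illustration:
using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// Hypothetical job class; any class implementing IJob works.
public class HelloJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // Read back the value stored in the JobDataMap when the job was built.
        var greeting = context.JobDetail.JobDataMap.GetString("greeting");
        Console.WriteLine(greeting);
        return Task.CompletedTask;
    }
}

public static class SchedulerExample
{
    public static async Task RunAsync()
    {
        IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        var jobDataMap = new JobDataMap();
        jobDataMap.Put("greeting", "hello");

        IJobDetail jobDetail = JobBuilder.Create<HelloJob>()
            .SetJobData(jobDataMap)
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(x => x
                .WithInterval(TimeSpan.FromMilliseconds(5000))
                .RepeatForever())
            .Build();

        await scheduler.ScheduleJob(jobDetail, trigger);
    }
}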

Related

Can we read a config file (Managed Files .properties file) in Jenkins Active Choices parameters?

I want to read my properties file in an Active Choices parameter Groovy script; the properties file is stored in Managed Files.
I want to use this properties file in an Active Choices Reactive Parameter Groovy script and retrieve all the values according to my previous selection. I have tried different approaches but was not able to retrieve the values. Is there any way to do this?
The properties file ('cat /var/jenkins_home/workspace/ss.properties') looks like this:
100.1.1.1=outside_in,wireless_in
200.x.x.x=mgmt_in,inside_in
Create a parameter of type "Active Choices Parameter", call it "hostnames", and add the following Groovy script there. The script simply reads the keys from the properties file, converts them into a list, and populates the choice parameter with them. The choice type in my case is Single Select; you can set it according to your needs.
Properties properties = new Properties()
File propertiesFile = new File('/var/jenkins_home/workspace/ss.properties')
// Load the properties file, closing the stream when done
propertiesFile.withInputStream { stream ->
    properties.load(stream)
}
// The property keys (the hostnames) become the choices
List<String> list = Collections.list(properties.propertyNames())
return list
Create another parameter of type "Active Choices Reactive Parameter", call it "config_list", and paste the following Groovy script there:
Properties properties = new Properties()
File propertiesFile = new File('/var/jenkins_home/workspace/ss.properties')
propertiesFile.withInputStream { stream ->
    properties.load(stream)
}
// 'hostnames' holds the selection from the referenced parameter;
// split that key's comma-separated value into the choices
def var = Arrays.asList(properties.getProperty(hostnames).split(','))
return var
Set the Referenced Parameter for this one to "hostnames". That means the data in this parameter is populated based on the selection made in the hostnames choice parameter. The choice type in my case is again Single Select; you can set it according to your needs.
Save the configuration and click on Build with Parameters in your Jenkins job; selecting a hostname should populate config_list with that host's comma-separated values.

How to set Object-array default event representation

I'm facing an error after changing the default event representation to object-array in this way:
Configuration configuration = new Configuration();
configuration.getEngineDefaults().getEventMeta().setDefaultEventRepresentation(EventUnderlyingType.OBJECTARRAY);
My event definitions use create schema. The EPL file deploys successfully, but when I send a new Object[] event, an error is raised saying there is no event definition for that event name.
If more details are needed, please ask.
After a few tests, I can say that it is necessary to define every event type when the default event representation is set to object-array: each name you send an Object[] event under must first be declared, for example with create schema MyEvent(symbol string, price double).

Parent & Child field syncing

I want to write custom code to support the behavior below:
Parent A has a field called "ABC". Each child task of Parent A has this field "ABC" as read-only. "ABC" edited in the parent should filter down to each child every time it is updated. Obviously this wouldn't be a true live sync, but each child should show the same value as soon as the end user refreshes the page.
What I really want is a scripted function that watches a custom field on a parent task for changes and, if it changed, carries that value over to the child tasks. I am using ScriptRunner but I cannot figure out how to do this. Could you please provide a script that can be used in ScriptRunner? I also want to automate this for all issues and sub-tasks in our instance.
I know this can be done through a custom script listener, but I need a script that can accomplish this task.
You can easily do this with a Scripted Field. The documentation is quite good.
Basically you first have to get your parent issue. The issue object has a method getParentObject() that does this:
Issue getParentObject()
If this issue is a subtask, return its parent.
Returns:
The parent Issue, or null if the issue is not a subtask.
And then you can get the value of your parent issue's custom field. Assuming this is a simple Text field, it would look something like this:
import com.atlassian.jira.component.ComponentAccessor

def parentIssue = issue.getParentObject() // null if this issue is not a subtask
if (parentIssue == null) return null

String customFieldName = "My fancy custom field"
def customFields = ComponentAccessor.getCustomFieldManager().getCustomFieldObjectsByName(customFieldName)
return parentIssue.getCustomFieldValue(customFields.first()) as String

Why is a DAC class not saved in Acumatica?

Let's say I have the following code:
DacClass cl = new DacClass();
//init fields of DacClass()
this.Persist();
but when I run this code in any graph, I'm getting different errors. Why?
You can't create a DAC record in the database from the current graph like that. As mentioned in the T200 manual, you should create an instance of a graph, insert the record through it, and call the Persist method on that instance. Another option is to use the method PXDatabase.Insert. The first option is preferable when you need insertion with graph logic; the second is preferable when you need performance.
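A minimal sketch of the first option; DacClassMaint and its Records data view are hypothetical names here, so substitute the graph and view that actually own your DAC:
// Create a dedicated graph instance instead of persisting from the current graph.
var graph = PXGraph.CreateInstance<DacClassMaint>();

var row = new DacClass();
// ... init fields of DacClass ...

// Insert through the graph's data view so defaulting and events run,
// then persist through that graph rather than calling this.Persist().
row = graph.Records.Insert(row);
graph.Persist();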

InSingletonScope using Ninject and a Windows Service

I re-posted this question as I think the original was a bit vague.
I am currently using a Windows Service that is on a 2 minute timer. I am using EF code first with a repository pattern for data access. I am using Ninject to inject my dependencies. I have the following bindings in my NinjectDependencyResolver class:
ConnectionStringSettings connectionStringSettings = ConfigurationManager.ConnectionStrings["Database"];
Bind<IDatabaseFactory>().To<DatabaseFactory>()
.InSingletonScope()
.WithConstructorArgument("connectionString", connectionStringSettings.Name);
Bind<IUnitOfWork>().To<UnitOfWork>().InSingletonScope();
Bind<IMyRepository>().To<MyRepository>().InSingletonScope();
When my service runs every 2 minutes, I do something similar to this:
foreach (var row in rows)
{
    var existing = myRepository.GetById(row.Id);
    if (existing == null)
    {
        existing = new Row();
        myRepository.Add(existing);
        unitOfWork.Commit();
    }
}
I am starting to see an error in my logs that says:
The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.
Is it correct to use InSingletonScope when using Ninject in a Windows Service? I believe I tried using different scopes like InTransientScope, but I could only get InSingletonScope to work with data access. Does the error message have anything to do with scope, or is it unrelated?
Assuming that the service is not the only process that operates on the database, you shouldn't use a singleton. What happens in this case is that you are reusing a DbContext that has cached entities which are out of date.
The better way is to treat each timer execution of the service similarly to a web/WCF request and create a new job processor for each request.
var processor = factory.CreateRowsProcessor();
processor.ProcessRows(rows);
public class RowsProcessor
{
    private readonly IUnitOfWork unitOfWork;
    private readonly IMyRepository myRepository;

    public RowsProcessor(IUnitOfWork unitOfWork, IMyRepository myRepository)
    {
        this.unitOfWork = unitOfWork;
        this.myRepository = myRepository;
    }

    public void ProcessRows(Row[] rows)
    {
        foreach (var row in rows)
        {
            var existing = myRepository.GetById(row.Id);
            if (existing == null)
            {
                existing = new Row();
                myRepository.Add(existing);
                unitOfWork.Commit();
            }
        }
    }
}
Depending on the problem, it might be even better to put the loop outside and create a new processor for each single row.
Read http://www.planetgeek.ch/2011/12/31/ninject-extensions-factory-introduction/ for more information about factories. Also have a look at InCallScope from the named scope extension if you need to inject the UoW into multiple classes: http://www.planetgeek.ch/2010/12/08/how-to-use-the-additional-ninject-scopes-of-namedscope/
InSingletonScope will create a singleton context: one context for the whole lifetime of your service. That is a very bad solution. Because the context holds all objects from all previous timer events, its memory consumption grows, and you can get errors like the one you are receiving at the moment (the error may really be unrelated to your singleton context, but most likely it is not). The exception says that you have two different objects with the same key identifier tracked by the context, which is not allowed.
Instead of using a singleton unit of work, repository, and context, use a singleton factory, and in each timer event request fresh instances from the factory. Dispose of the context at the end of the timer event processing.
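A sketch of that factory-based wiring, using ToFactory() from Ninject.Extensions.Factory and InCallScope() from Ninject.Extensions.NamedScope; IRowsProcessorFactory is a hypothetical interface added for illustration:
// Hypothetical factory interface; Ninject.Extensions.Factory generates the implementation.
public interface IRowsProcessorFactory
{
    RowsProcessor CreateRowsProcessor();
}

// Composition root: one fresh context/unit of work per factory call
// instead of one for the whole service lifetime.
Bind<IDatabaseFactory>().To<DatabaseFactory>()
    .InCallScope()
    .WithConstructorArgument("connectionString", connectionStringSettings.Name);
Bind<IUnitOfWork>().To<UnitOfWork>().InCallScope();
Bind<IMyRepository>().To<MyRepository>().InCallScope();
Bind<IRowsProcessorFactory>().ToFactory();

// On each timer tick: resolve a fresh processor so its context starts empty.
var processor = rowsProcessorFactory.CreateRowsProcessor();
processor.ProcessRows(rows);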
