RAMJobStore (quartz_jobs.xml) to AdoJobStore Data Move - quartz.net

My team and I are trying to figure out a way to "load up" our Sql Server database with the Quartz.NET schema installed.
<add key="quartz.dataSource.default.provider" value="SqlServer-20"/>
For demos, we've been storing our job setups in .xml (quartz_jobs.xml).
My question is:
Is there a way to "load up" the scheduling data from .xml (quartz_jobs.xml) (Quartz.Simpl.RAMJobStore) and then "save it off" to an AdoJobStore (Quartz.Impl.AdoJobStore.JobStoreTX)?
The reason is that our "start up" data can easily be placed in the .xml.
Right now, the only way I see to put jobs into an AdoJobStore is "coding them up" in C# through the Quartz.Net object model.
Or "playing back" some profiled T-SQL (using SQL Profiler) :(
The direct question is the one above (getting the xml into SQL Server); the higher-level question is: how does one populate an AdoJobStore with start-up data without "coding it up" in C#?
EDIT:
I'm including my code that works, using Marko's response (accepted as the answer).
My configuration file:
<quartz>
<add key="quartz.plugin.xml.type" value="Quartz.Plugin.Xml.XMLSchedulingDataProcessorPlugin, Quartz" />
<add key="quartz.plugin.xml.fileNames" value="~/Quartz_Jobs_001.xml" />
<add key="quartz.plugin.xml.ScanInterval" value="10" />
<add key="quartz.jobStore.type" value="Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" />
<add key="quartz.jobStore.driverDelegateType" value="Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz"/>
<add key="quartz.jobStore.dataSource" value="default"/>
<add key="quartz.dataSource.default.connectionString" value="Server=MyServer\MyInstance;Database=QuartzDB;Trusted_Connection=True;Application Name='quartz_config';"/>
<add key="quartz.dataSource.default.provider" value="SqlServer-20"/>
</quartz>
My code:
NameValueCollection config = (NameValueCollection)ConfigurationManager.GetSection("quartz");
ISchedulerFactory factory = new StdSchedulerFactory(config);
IScheduler sched = factory.GetScheduler();
sched.Clear(); // wipe any existing jobs/triggers from the store first
sched.Start(); // required: the XML data only persists to the database on start
NOTE:
I had to call IScheduler.Start() for the values to persist to the database.
The consequence of adding this line:
<add key="quartz.plugin.xml.ScanInterval" value="10" />
was that I could add entries to the jobs .xml file, and it would append the new data to the database (append-only) while the engine was running.
Aka, I can "add lookup data" (to the database) "on the fly", without stopping the service. A nice little tidbit.
Removing a job requires a restart.
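(If a restart is too disruptive, deleting through the API should also remove the job from the ADO store; a hedged sketch against the Quartz.NET 2.x synchronous API, using an illustrative job name:)
// Hedged sketch (Quartz.NET 2.x): delete a job from the ADO store at runtime.
// "jobName1"/"jobGroup1" are placeholders for your own job identity.
bool removed = sched.DeleteJob(new JobKey("jobName1", "jobGroup1"));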

You should be able to do this quite easily. You can combine the XML configuration and ADO job store. This will make the XML processor update the jobs in the persistent store.
Here's a minimal configuration:
NameValueCollection properties = new NameValueCollection();
properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
properties["quartz.jobStore.dataSource"] = "default";
properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
properties["quartz.dataSource.default.connectionString"] = "Server=(local);Database=quartz;Trusted_Connection=True;";
properties["quartz.dataSource.default.provider"] = "SqlServer-20";
// job initialization plugin handles our xml reading, without it defaults are used
properties["quartz.plugin.xml.type"] = "Quartz.Plugin.Xml.XMLSchedulingDataProcessorPlugin, Quartz";
properties["quartz.plugin.xml.fileNames"] = "~/quartz_jobs.xml";
// First we must get a reference to a scheduler
ISchedulerFactory sf = new StdSchedulerFactory(properties);
IScheduler sched = sf.GetScheduler();
And our XML configuration contains overwrite instructions so that the job store is refreshed:
<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data xmlns="http://quartznet.sourceforge.net/JobSchedulingData"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
version="2.0">
<processing-directives>
<overwrite-existing-data>true</overwrite-existing-data>
</processing-directives>
<schedule>
<job>
<name>jobName1</name>
<group>jobGroup1</group>
<description>jobDesciption1</description>
<job-type>Quartz.Examples.Example15.SimpleJob, Quartz.Examples</job-type>
<durable>true</durable>
<recover>false</recover>
</job>
<trigger>
<simple>
<name>simpleName</name>
<group>simpleGroup</group>
<description>SimpleTriggerDescription</description>
<job-name>jobName1</job-name>
<job-group>jobGroup1</job-group>
<start-time>1982-06-28T18:15:00.0Z</start-time>
<repeat-count>-1</repeat-count>
<repeat-interval>3000</repeat-interval>
</simple>
</trigger>
</schedule>
</job-scheduling-data>
You can also make the XML processor auto-refresh the store on change (checked every 10 seconds) if you define:
properties["quartz.plugin.xml.ScanInterval"] = "10";

Related

Serilog - With AppSettings Config how do I configure Sub Loggers & Include Certain Namespaces only?

I am trying to set up Serilog in a CMS that ships some default logging configuration that we define as the CMS, while allowing developers using the CMS to extend and configure their own logging requirements by using the Serilog AppSettings NuGet package - https://github.com/serilog/serilog-settings-appsettings
I have some of this working and am able to configure other sinks in an external configuration file. The problem I have, and need help with, is: how do I let developers configure a file sink that generates a .txt log file including only their namespace?
With a C# class I know I can create a sub-logger and then use a filter like so:
.Filter.ByIncludingOnly(Matching.FromSource("DevelopersNamespace")), but the Serilog Analyzer VS Extension - https://github.com/Suchiman/SerilogAnalyzer - cannot generate an example XML AppSettings configuration for this.
Here is a copy of my Logger Configuration in C#
Serilog.Debugging.SelfLog.Enable(msg => System.Diagnostics.Debug.WriteLine(msg));
//Set this environment variable - so that it can be used in external config file
//add key="serilog:write-to:RollingFile.pathFormat" value="%BASEDIR%\logs\log-{Date}.txt" />
Environment.SetEnvironmentVariable("BASEDIR", AppDomain.CurrentDomain.BaseDirectory, EnvironmentVariableTarget.Process);
Log.Logger = new LoggerConfiguration()
.MinimumLevel.Debug() //Set to highest level of logging (as any sinks may want to restrict it to Errors only)
.Enrich.WithProcessId()
.Enrich.WithProcessName()
.Enrich.WithThreadId()
.Enrich.WithProperty("AppDomainId", AppDomain.CurrentDomain.Id)
.Enrich.WithProperty("AppDomainAppId", HttpRuntime.AppDomainAppId.ReplaceNonAlphanumericChars(string.Empty))
.Enrich.With<Log4NetLevelMapperEnricher>()
//Main .txt logfile - in similar format to older Log4Net output
//Ends with ..txt as Date is inserted before file extension substring
.WriteTo.File($#"{AppDomain.CurrentDomain.BaseDirectory}\App_Data\Logs\UmbracoTraceLog.{Environment.MachineName}..txt",
rollingInterval: RollingInterval.Day,
restrictedToMinimumLevel: LogEventLevel.Debug,
retainedFileCountLimit: null, //Setting to null means we keep all files - default is 31 days
outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss,fff} [P{ProcessId}/D{AppDomainId}/T{ThreadId}] {Log4NetLevel} {SourceContext} - {Message:lj}{NewLine}{Exception}")
//.clef format (Compact log event format, that can be imported into local SEQ & will make searching/filtering logs easier)
//Ends with ..txt as Date is inserted before file extension substring
.WriteTo.File(new CompactJsonFormatter(), $@"{AppDomain.CurrentDomain.BaseDirectory}\App_Data\Logs\UmbracoTraceLog.{Environment.MachineName}..json",
rollingInterval: RollingInterval.Day, //Create a new JSON file every day
retainedFileCountLimit: null, //Setting to null means we keep all files - default is 31 days
restrictedToMinimumLevel: LogEventLevel.Debug)
//Read any custom user configuration of logging from serilog config file
.ReadFrom.AppSettings(filePath: AppDomain.CurrentDomain.BaseDirectory + @"\config\serilog.config")
.CreateLogger();
Here is an example of the AppSettings configuration file with which users will be able to configure their own sinks.
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<appSettings>
<!-- Controls log levels for all sinks (Set this higher than child sinks) -->
<add key="serilog:minimum-level" value="Verbose" />
<!-- Write to a user log file -->
<add key="serilog:using:File" value="Serilog.Sinks.File" />
<add key="serilog:write-to:File.path" value="%BASEDIR%\logs\warren-log.txt" /><!-- Can we do a relative path to website ? -->
<add key="serilog:write-to:File.restrictedToMinimumLevel" value="Debug" />
<add key="serilog:write-to:File.retainedFileCountLimit" value="32" /> <!-- Number of log files to keep (or remove value to keep all files) -->
<add key="serilog:write-to:File.rollingInterval" value="Day" /> <!-- Create a new log file every Minute/Hour/Day/Month/Year/infinite -->
<!-- TODO: How do I filter the file sink for customer to their own namespace ?? -->
</appSettings>
</configuration>
I am open to ideas and suggestions on how I can achieve this, with the goal of allowing developers to configure their own sinks and optionally filter to their own namespace if they wish (as I doubt users will want to write their own sink code).
For anyone interested, or anyone who comes across this post at a later date, this is how I solved it.
I used two configuration files: one to configure the main logging pipeline, and a user config for a sub-logger to which developers can apply filtering if required without affecting the main logging pipeline.
Serilog.Debugging.SelfLog.Enable(msg => System.Diagnostics.Debug.WriteLine(msg));
//Set this environment variable - so that it can be used in external config file
//add key="serilog:write-to:RollingFile.pathFormat" value="%BASEDIR%\logs\log.txt" />
Environment.SetEnvironmentVariable("BASEDIR", AppDomain.CurrentDomain.BaseDirectory, EnvironmentVariableTarget.Process);
Log.Logger = new LoggerConfiguration()
.MinimumLevel.Verbose() //Set to highest level of logging (as any sinks may want to restrict it to Errors only)
.Enrich.WithProcessId()
.Enrich.WithProcessName()
.Enrich.WithThreadId()
.Enrich.WithProperty("AppDomainId", AppDomain.CurrentDomain.Id)
.Enrich.WithProperty("AppDomainAppId", HttpRuntime.AppDomainAppId.ReplaceNonAlphanumericChars(string.Empty))
.Enrich.With<Log4NetLevelMapperEnricher>()
//Main .txt logfile - in similar format to older Log4Net output
//Ends with ..txt as Date is inserted before file extension substring
.WriteTo.File($#"{AppDomain.CurrentDomain.BaseDirectory}\App_Data\Logs\UmbracoTraceLog.{Environment.MachineName}..txt",
rollingInterval: RollingInterval.Day,
restrictedToMinimumLevel: LogEventLevel.Verbose,
retainedFileCountLimit: null, //Setting to null means we keep all files - default is 31 days
outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss,fff} [P{ProcessId}/D{AppDomainId}/T{ThreadId}] {Log4NetLevel} {SourceContext} - {Message:lj}{NewLine}{Exception}")
//.clef format (Compact log event format, that can be imported into local SEQ & will make searching/filtering logs easier)
//Ends with ..txt as Date is inserted before file extension substring
.WriteTo.File(new CompactJsonFormatter(), $@"{AppDomain.CurrentDomain.BaseDirectory}\App_Data\Logs\UmbracoTraceLog.{Environment.MachineName}..json",
rollingInterval: RollingInterval.Day, //Create a new JSON file every day
retainedFileCountLimit: null, //Setting to null means we keep all files - default is 31 days
restrictedToMinimumLevel: LogEventLevel.Verbose)
//Read from main serilog.config file
.ReadFrom.AppSettings(filePath: AppDomain.CurrentDomain.BaseDirectory + @"\config\serilog.config")
//A nested logger - where any user-configured sinks via config cannot affect the main 'umbraco' logger above
.WriteTo.Logger(cfg =>
cfg.ReadFrom.AppSettings(filePath: AppDomain.CurrentDomain.BaseDirectory + @"\config\serilog.user.config"))
.CreateLogger();
Here is then a sample of the two configuration files:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<appSettings>
<!-- Used to toggle the log levels for the main Umbraco log files -->
<!-- Found at /app_data/logs/ -->
<!-- NOTE: Changing this will also flow down into serilog.user.config -->
<add key="serilog:minimum-level" value="Verbose" />
<!-- To write to new log locations (aka Sinks) such as your own .txt files, ELMAH.io, Elastic, SEQ -->
<!-- Please use the serilog.user.config file to configure your own logging needs -->
</appSettings>
</configuration>
And here is the configuration file where the user can then filter with their own namespace:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<appSettings>
<!-- Controls log levels for all user-defined child sub-logger sinks configured here (Set this higher than child sinks) -->
<add key="serilog:minimum-level" value="Verbose" />
<!-- For Different Namespaces - Set different logging levels -->
<add key="serilog:minimum-level:override:Microsoft" value="Warning" />
<add key="serilog:minimum-level:override:Microsoft.AspNetCore.Mvc" value="Error" />
<add key="serilog:minimum-level:override:YourNameSpace" value="Information" />
<!-- All logs defined via user.config will contain this property (won't be in main Umbraco logs) -->
<add key="serilog:enrich:with-property:websiteName" value="Warrens Website" />
<!-- Write to a user log file -->
<add key="serilog:using:File" value="Serilog.Sinks.File" />
<add key="serilog:write-to:File.path" value="%BASEDIR%\logs\warren-log.txt" />
<add key="serilog:write-to:File.restrictedToMinimumLevel" value="Debug" /> <!-- I will be ignored as Debug as the user logging pipleine has it min set to Information, so only Info will flow through me -->
<add key="serilog:write-to:File.retainedFileCountLimit" value="32" /> <!-- Number of log files to keep (or remove value to keep all files) -->
<add key="serilog:write-to:File.rollingInterval" value="Day" /> <!-- Create a new log file every Minute/Hour/Day/Month/Year/infinite -->
<!-- Filters all above sink's to use this expression -->
<!-- Common use case is to include SourceType starting with your own namespace -->
<add key="serilog:using:FilterExpressions" value="Serilog.Filters.Expressions" />
<add key="serilog:filter:ByIncluding.expression" value="StartsWith(SourceContext, 'MyNamespace')" />
</appSettings>
</configuration>
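(For reference, a hedged C# sketch of roughly what this user config wires up: a sub-logger filtered by an expression from the Serilog.Filters.Expressions package; the namespace and file path are illustrative.)
// Hedged sketch: in-code equivalent of the serilog.user.config sub-logger.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Verbose()
    .WriteTo.Logger(lc => lc
        // ByIncludingOnly(string) comes from Serilog.Filters.Expressions
        .Filter.ByIncludingOnly("StartsWith(SourceContext, 'MyNamespace')")
        .WriteTo.File(@"logs\developer-log.txt", rollingInterval: RollingInterval.Day))
    .CreateLogger();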

Quartz.net durable and clustered

The config:
<quartz>
<add key="quartz.scheduler.instanceName" value="ChengongDemo" />
<add key="quartz.scheduler.instanceId" value="AUTO" />
<!-- Thread pool -->
<add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" />
<add key="quartz.threadPool.threadCount" value="5" />
<add key="quartz.threadPool.threadPriority" value="Normal" />
<add key="quartz.jobStore.type" value="Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" />
<add key="quartz.jobStore.tablePrefix" value="QRTZ_" />
<add key="quartz.jobStore.driverDelegateType" value="Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz" />
<add key="quartz.jobStore.dataSource" value="myDb" />
<add key="quartz.dataSource.myDb.connectionString" value="Data Source=192.168.15.23;Initial Catalog=Quartz;User ID=sa;Password=123456789" />
<add key="quartz.dataSource.myDb.provider" value="SqlServer-20" />
<!-- Clustering -->
<add key="quartz.jobStore.Clustered" value="true" />
<add key="quartz.jobStore.clusterCheckinInterval" value="600" />
</quartz>
The code:
public static void Run()
{
ISchedulerFactory sf = new StdSchedulerFactory();
Sched = sf.GetScheduler();
var jobDetail = new JobKey("job1", "group1");
var triggerKey = new TriggerKey("trigger1", "group1");
if (!Sched.CheckExists(jobDetail) && !Sched.CheckExists(triggerKey))
{
var job = JobBuilder.Create<TestJob>()
.WithIdentity(jobDetail)
.Build();
var trigger = TriggerBuilder.Create()
.WithIdentity(triggerKey)
.ForJob(job.Key)
.WithCronSchedule("*/2 * * ? * *")
.Build();
Sched.ScheduleJob(job, trigger);
}
Sched.Start();
}
The Result:
The service ran normally. I shut it down for a few seconds, then started it again. The job then ran several times at the same time.
Why? Can someone help me? Thank you for your help.
If you want to disallow "running at the same time", then apply the DisallowConcurrentExecution attribute:
https://www.quartz-scheduler.net/documentation/quartz-2.x/tutorial/more-about-jobs.html
As in
[DisallowConcurrentExecution]
public class MyDoesNotRunConcurrentlyJob : IJob
{
    public void Execute(IJobExecutionContext context) { /* job body here */ }
}
HOWEVER, I think what you are experiencing is your first encounter with a "misfire", i.e. a job that didn't execute on its ideal schedule. There are rules around that.
I would read the URL below. While it's for the Java Quartz (and quartz.net is a port of it to .NET), it is the best mini resource for understanding misfires.
He even mentions your specific case:
"the scheduler itself was down"
http://www.nurkiewicz.com/2012/04/quartz-scheduler-misfire-instructions.html
I will copy the header information here so that, in case the url ever "dies", you (or more likely future readers) can use the text below to search for the article.
Quartz scheduler misfire instructions explained
Sometimes Quartz is not capable of running your job at the time when you desired. There are three reasons for that:
all worker threads were busy running other jobs (probably with higher priority)
the scheduler itself was down
the job was scheduled with start time in the past (probably a coding error)
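(Following that article, a hedged sketch of setting an explicit misfire instruction on the cron trigger from the question, via the Quartz.NET 2.x builder API:)
// Hedged sketch: skip firings missed while the scheduler was down,
// rather than replaying them on restart.
var trigger = TriggerBuilder.Create()
    .WithIdentity("trigger1", "group1")
    .ForJob(new JobKey("job1", "group1"))
    .WithCronSchedule("*/2 * * ? * *", x => x.WithMisfireHandlingInstructionDoNothing())
    .Build();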

'Job Exists' error when using AdoJobStore

"Couldn't store job: Unable to store Job: 'QbBackupGroup.QbBackup', because one already exists with this identification."
I'm getting this anytime after the first run on a clean database (SQLCE). I've searched and found a few suggestions (i.e. here and here), but they pertain to running Quartz.NET without TopShelf.
The suggested code is based on standard Quartz.NET architecture (e.g. variables and values), but the Quartz.NET code for TopShelf is all delegates. Nothing seems to directly translate.
My code is below.
However... all that said... it may not matter, as my eventual goal is to be able to add/remove jobs/triggers at runtime without restarting the service. This hard-coded design may not even apply, but I haven't figured out how to do the runtime bit yet.
If I can get past this, the runtime bit is next. Unless the runtime bit negates the need for this. Catch-22.
Please advise.
Service:
Sub Main()
Dim oSchedule As Action(Of SimpleScheduleBuilder)
Dim oTrigger As Func(Of ITrigger)
Dim oDetail As Func(Of IJobDetail)
Dim oJob As Action(Of QuartzConfigurator)
oSchedule = Function(ScheduleBuilder) As SimpleScheduleBuilder
Return ScheduleBuilder.WithIntervalInSeconds(5).RepeatForever
End Function
oTrigger = Function() As ITrigger
Return TriggerBuilder.Create.WithIdentity(QbBackup.Job.Trigger, QbBackup.Job.Group).WithSimpleSchedule(oSchedule).Build
End Function
oDetail = Function()
Return JobBuilder.Create(Of QbBackup.Job).WithIdentity(QbBackup.Job.Name, QbBackup.Job.Group).Build
End Function
oJob = Function(Configurator As QuartzConfigurator)
Return Configurator.WithJob(oDetail).AddTrigger(oTrigger)
End Function
HostFactory.Run(Sub(Configurator)
Configurator.Service(Of Manager)(Sub(Service)
Service.ConstructUsing(Function(Factory) As ServiceControl
Return New Manager
End Function)
Service.WhenStarted(Function(Notifier, HostControl) As Boolean
Return Notifier.StartService(HostControl)
End Function)
Service.WhenStopped(Function(Notifier, HostControl) As Boolean
Return Notifier.StopService(HostControl)
End Function)
Service.ScheduleQuartzJob(oJob)
End Sub)
Configurator.SetDescription(SchedulerInfo.Description)
Configurator.SetServiceName(SchedulerInfo.Product)
Configurator.SetDisplayName(SchedulerInfo.Title)
Configurator.StartAutomatically()
Configurator.RunAsLocalSystem()
End Sub)
End Sub
Job:
Imports Common.Logging
Public Class Job
Implements IJob
Private Shared Logger As ILog = LogManager.GetLogger(GetType(Job))
Public Sub Execute(Context As IJobExecutionContext) Implements IJob.Execute
Try
Job.Logger.Info(Now.ToString)
Catch ex As Exception
Throw New JobExecutionException(ex.Message, ex)
End Try
End Sub
Public Shared ReadOnly Property Name As String
Get
Return QbBackupInfo.Product
End Get
End Property
Public Shared ReadOnly Property Trigger As String
Get
Return "{0}Trigger".ToFormat(Job.Name)
End Get
End Property
Public Shared ReadOnly Property Group As String
Get
Return "{0}Group".ToFormat(Job.Name)
End Get
End Property
End Class
App.config:
<quartz>
<!-- Configure Scheduler -->
<add key="quartz.scheduler.instanceName" value="Scheduler" />
<add key="quartz.scheduler.instanceId" value="Scheduler" />
<!-- Configure Thread Pool -->
<add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" />
<add key="quartz.threadPool.threadCount" value="10" />
<add key="quartz.threadPool.threadPriority" value="Normal" />
<!-- Configure Job Store -->
<add key="quartz.jobStore.misfireThreshold" value="60000" />
<add key="quartz.jobStore.type" value="Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" />
<add key="quartz.jobStore.useProperties" value="true" />
<add key="quartz.jobStore.dataSource" value="default" />
<add key="quartz.jobStore.tablePrefix" value="QRTZ_" />
<add key="quartz.jobStore.lockHandler.type" value="Quartz.Impl.AdoJobStore.UpdateLockRowSemaphore, Quartz" />
<!-- Configure Data Source -->
<add key="quartz.dataSource.default.connectionString" value="Data Source=C:\ProgramData\Scheduler\Scheduler.sdf;Max Database Size=4091;Persist Security Info=False;" />
<add key="quartz.dataSource.default.provider" value="SqlServerCe-400" />
</quartz>
The combination of a job's group and its name is the job's unique key in Quartz.Net; it should be unique across all jobs. The same applies to triggers. The fix is to change the job's name for each job, if you really need to have multiple jobs of the same type running. You can also have just one job with multiple triggers instead; each trigger has its own data map, so you could add custom information there. One way to get around this is to append a timestamp to the job's name so that they are all uniquely named.
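(A hedged C# sketch of the workarounds described above: guard on the identity, or suffix a timestamp; the names are illustrative and this ignores the TopShelf delegate wiring:)
// Hedged sketch: skip scheduling if the identity already exists in the store...
var jobKey = new JobKey("QbBackup", "QbBackupGroup");
if (!sched.CheckExists(jobKey))
    sched.ScheduleJob(job, trigger);
// ...or make every identity unique by appending a timestamp to the job's name.
string uniqueName = string.Format("QbBackup_{0:yyyyMMddHHmmssfff}", DateTime.UtcNow);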

Create job in code that persists the "quartz_jobs.xml" file

I have a quartz_jobs.xml file that has jobs defined in it.
I can load the quartz_jobs.xml file configuration and jobs start firing.
Aka, "reading" the jobs from the quartz_jobs.xml file works fine.
However, if I manually add a job to the IScheduler, this manually added job will start running (along with the jobs defined in the quartz_jobs.xml file),
BUT the job is not written to the xml.
Is there a way to add a job to the IScheduler and have it written back to the quartz_jobs.xml file?
Note: when I wired up the exact same code to an AdoJobStore and called IScheduler.Start(), the jobs did get added to the database tables (aka, they persist).
But the same code wired to a RAMJobStore running against the xml does not "save" the jobs to the quartz_jobs.xml file.
Thanks.
Here is my quartz.config (not the jobs) file.
<quartz>
<add key="quartz.plugin.jobInitializer.type" value="Quartz.Plugin.Xml.XMLSchedulingDataProcessorPlugin" />
<add key="quartz.scheduler.instanceName" value="DefaultQuartzScheduler" />
<add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" />
<add key="quartz.threadPool.threadCount" value="10" />
<add key="quartz.threadPool.threadPriority" value="2" />
<add key="quartz.jobStore.misfireThreshold" value="60000" />
<add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" />
<add key="quartz.plugin.jobInitializer.fileNames" value="quartz_jobs.xml" />
<add key="quartz.plugin.jobInitializer.failOnFileNotFound" value="true" />
<add key="quartz.plugin.jobInitializer.scanInterval" value="120" />
</quartz>
http://quartznet.sourceforge.net/apidoc/2.0/html/html/2909678f-44c6-6e13-afa5-c50e7b5ee435.htm
XMLSchedulingDataProcessorPlugin Class Quartz.NET API Documentation
This plugin loads XML file(s) to add jobs and schedule them with triggers as the scheduler is initialized, and can optionally periodically scan the file for changes.
I don't think you can.
XMLSchedulingDataProcessor doesn't have any methods to serialize your jobs back to the file and QuartzXmlConfiguration20 seems to have only read-only collections.
I've tried to do some experiments.
Apparently you can manage the whole process of loading and processing the xml file yourself:
ITypeLoadHelper loadHelper = new SimpleTypeLoadHelper();
loadHelper.Initialize();
XMLSchedulingDataProcessor processor = new XMLSchedulingDataProcessor(loadHelper);
processor.OverWriteExistingData = true;
processor.ProcessFileAndScheduleJobs("my_jobs.xml", Scheduler);
(so you do not have to use any configuration in your config file)
but, again, there's no way to append extra elements.
Another way could be to deserialize the xml file and try to manipulate it:
string xml = string.Empty;
using (var xmlJobFile = new System.IO.StreamReader("my_jobs.xml"))
{
xml = xmlJobFile.ReadToEnd();
}
XmlSerializer xs = new XmlSerializer(typeof(QuartzXmlConfiguration20));
QuartzXmlConfiguration20 data = (QuartzXmlConfiguration20)xs.Deserialize(new StringReader(xml));
if (data == null)
{
throw new SchedulerConfigException("Job definition data from XML was null after deserialization");
}
But it gets too complicated.
I reckon that the best option is to use AdoJobStore.
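(To illustrate that point, a hedged sketch: with JobStoreTX configured, jobs added through the API persist automatically, so no XML write-back is needed; MyJob and the identities are hypothetical.)
// Hedged sketch: with an AdoJobStore (JobStoreTX) configured, ScheduleJob
// writes to the QRTZ_* tables directly; no XML serialization required.
var job = JobBuilder.Create<MyJob>() // MyJob is a hypothetical IJob
    .WithIdentity("runtimeJob", "runtimeGroup")
    .Build();
var trigger = TriggerBuilder.Create()
    .WithIdentity("runtimeTrigger", "runtimeGroup")
    .StartNow()
    .WithSimpleSchedule(x => x.WithIntervalInMinutes(5).RepeatForever())
    .Build();
Scheduler.ScheduleJob(job, trigger);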

iBatis - select environment using XML

I have this configuration in ibatis-config.xml
<configuration>
<properties resource="collector.properties"/>
<environments default="development">
<environment id="development">
<transactionManager type="JDBC" />
<dataSource type="POOLED">
<property name="driver" value="${dev.jdbc.driver}" />
<property name="url" value="${dev.jdbc.url}" />
</dataSource>
</environment>
<environment id="test">
<transactionManager type="JDBC" />
<dataSource type="POOLED">
<property name="driver" value="${test.jdbc.driver}" />
<property name="url" value="${test.jdbc.url}" />
</dataSource>
</environment>
</environments>
<mappers>
</mappers>
</configuration>
As shown, it will load the datasource from <environment id="development">.
QUESTION: Is it possible at run time to switch to <environment id="test"> without modifying the XML? For example, I have a test file where I'm using SqlSessionFactory and want to set it programmatically to use the test environment.
The SqlSessionFactoryBuilder.build() method can select a specific environment from the XML.
For example,
private Reader reader;
private SqlSessionFactory sqlSessionFactorys;
private SqlSession testSession;
private SqlSession devSession;
reader = Resources.getResourceAsReader("ibatis-config.xml");
sqlSessionFactorys = new SqlSessionFactoryBuilder().build(reader, "test");
testSession = sqlSessionFactorys.openSession(); // test env
reader = Resources.getResourceAsReader("ibatis-config.xml"); // re-open: build() closes the reader (see the note below)
sqlSessionFactorys = new SqlSessionFactoryBuilder().build(reader, "development");
devSession = sqlSessionFactorys.openSession(); // dev env
According to this site:
http://codenav.org/code.html?project=/org/mybatis/mybatis/3.2.5&path=/Source%20Packages/org.apache.ibatis.session/SqlSessionFactoryBuilder.java
the build() method now closes the reader/inputstream before returning the SqlSessionFactory. So you will need to open a new reader/stream in order to load the second session. I discovered this when I separated my account/security tables out into a database separate from the main application DB. On my first go-around I kept getting errors when the bean was trying to load the session factory, due to an input stream error (closed).
e.g.
try {
inputStream = Resources.getResourceAsStream(MYBATIS_CONFIG_PATH);
prodDbSqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream, prodDbEnvironment);
inputStream = Resources.getResourceAsStream(MYBATIS_CONFIG_PATH);
securityDbSqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream, securityDbEnvironment);
} catch (IOException ex) {
String msg = "Unable to get SqlSessionFactory";
CustomizedLogger.LOG(Level.SEVERE, this.getClass().getCanonicalName(), "methodName", msg, ex);
}
Although I put them in separate try/catch blocks, so that I know right away from the log file which one failed.
I also implement this as a singleton so that it only has to load resources once.
Context: I run this in a Java EE container and use MyBatis for straightforward queries, and where I would otherwise use native queries, since it is a much simpler and more straightforward framework. I might switch to using it over JPA everywhere, but that is still up for debate.
