not able to run job in batch - x++

static void Job47(Args _args)
{
    str path, stx;
    TreeNodeIterator iter;
    TreeNode treeNode, treeNodeToRelease;
    Map dictMenuDisplay;
    FormName formName;
    MenuItemName menuItemName;
    container conMenu;
    int i, n;
    ;
    for (n = 1; n <= 1; n++)
    {
        info::messageWinAddLine(strfmt("iter:%1", n));
        path = "Menu Items\\Display";
        dictMenuDisplay = new Map(Types::String, Types::Container);
        treeNode = TreeNode::findNode(path);
        iter = treeNode.AOTiterator();
        treeNode = iter.next();
        while (treeNode)
        {
            formName = treeNode.AOTgetProperty("Object");
            menuItemName = treeNode.AOTname();
            if (dictMenuDisplay.exists(formName))
            {
                conMenu = dictMenuDisplay.lookup(formName);
                conMenu = conIns(conMenu, conlen(conMenu) + 1, menuItemName);
                dictMenuDisplay.insert(formName, conMenu);
            }
            else
            {
                dictMenuDisplay.insert(formName, [menuItemName]);
            }
            treeNode = iter.next();
        }
    }
}
When I run the above job in batch it shows the following error: "The server side impersonated (RunAs) session tried to invoke a method available for client-side processing only" and points to the line
info::messageWinAddLine(strfmt("iter:%1",n));
I have tried returning false from the runsImpersonated() method in the RunBaseBatch class, but that does not seem to work either.
I am new to AX 2009 and don't really understand what it means to run a job on the client or on the server, so kindly lead me in the right direction.

First, remove the modification to the RunBaseBatch class. That method is meant to be overridden in classes extending RunBaseBatch (inheritance). Take a look at the class "Tutorial_RunbaseBatch" for insight into how the RunBaseBatch pattern works.
Also consider that when you run X++ code, it can run either client-side or server-side, and methods can be restricted to one side or the other. The Global::info method can run on both the client and the server.
When you activate a batch to run a class (not a job) extending RunBaseBatch, the Batch Framework runs the class server-side according to your settings. Your code must then be independent of the client, meaning there cannot be any line of code that requires client-side access. WinAPI::moveFile is an example of a client-only method.
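For instance, a minimal sketch of a batch-safe version of the offending line (assuming its only purpose is to log progress) would use the global info method, which can run on either tier and writes to the Infolog / batch log instead of the client message window:

info(strfmt("iter: %1", n)); // Global::info works both client- and server-side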
Hope this helps a bit.

Related

Can an Azure Function be Executed for Multiple Environments

I've encountered a dependency injection scenario which I cannot find a way through.
We currently have an Azure function.
We are using dependency injection via the FunctionsStartup attribute.
That all works fine, until I get asked to make it work for multiple environments.
The tester found it too onerous to deploy to 7 different environments, so I was asked to re-jig the function so that it runs (in a loop) for those environments.
That means 7 different IConfigurations and somehow having 7 separate compartmentalised IOC registrations of services.
I can't think of a way of doing that without significantly restructuring the way abstractions are resolved. Even if you set up registrations in a loop and inject an IEnumerable of a service, when it goes to resolve a child dependency it just pulls the last one registered, rather than the one that was meant to correlate with the current item being iterated.
So, something like this (using Autofac):
Registration
foreach (var configuration in configurations)
{
    containerBuilder.Register<ICosmosDbService<AccountUsage>>(sp =>
    {
        var dBConfig = CosmosDBHelper.GetProjectDatabaseConfig(configuration.Value, Project.Jupiter);
        return CosmosClientInitializer<AccountUsage>.Initialize(dBConfig);
    }).As<ICosmosDbService<AccountUsage>>();
}
Usage
private readonly IEnumerable<IAccountUsageService> _accountUsageService;

public JobScheduler(IEnumerable<IAccountUsageService> accountUsageService)
{
    _accountUsageService = accountUsageService;
}

[FunctionName("JobScheduler")]
public async Task Run([TimerTrigger("0 */2 * * * *")] TimerInfo myTimer, ILogger log)
{
    log.LogInformation($"Job Scheduler Timer trigger function executed at: {DateTime.Now}");
    try
    {
        foreach (var usageService in _accountUsageService)
        {
            var logs = await usageService.GetCurrentAccountUsage("gfkjdsasjfa");
            // ...
        }
    }
    catch (Exception ex)
    {
        // error handling elided in the original snippet
        log.LogError(ex, "Account usage lookup failed");
    }
}
I realise this kind of DI usage is not ideal (and does not even work).
Is there a way to structure an Azure Function such that it can execute for different configurations in a compartmentalised manner? Or is this really just fighting against the technology?
You've got a couple of ways to do this - either inject the right dependencies into the function constructor, or resolve them dynamically using a service-locator type approach with a named instance.
Let's consider the second approach and what it would mean for your implementation. As you demonstrated, you'd loop through your instances, resolve the dependency you want to use, and then invoke it:
foreach (var usageService in _accountUsageService)
{
    var logs = await usageService.GetCurrentAccountUsage("named-instance");
    logs.DoSomething();
}
This is technically possible, but now you're doing batch processing - more than one piece of work triggered by a single event (the timer) - which means you have to deal with a couple of extra problems: what should you do if one of the instances fails, and what should you do if one of the instances runs slowly?
Ideally, you want functions to do the smallest bit of work they can and complete quickly - you don't want failure or slowness in one particular instance impacting the other instances. By breaking it down to the smallest piece of work (think: one event trigger does one piece of work), you let the functions runtime handle retries on failure, and threading and concurrency are taken care of for you.
You could then think about a couple of ways to do this: a) multiple function signatures and a service-resolver approach, e.g.
public class JobScheduler
{
    private readonly IEnumerable<IAccountUsageService> _accountUsageService;

    public JobScheduler(IEnumerable<IAccountUsageService> accountUsageService)
    {
        _accountUsageService = accountUsageService;
    }

    [FunctionName("FirstInstance")]
    public async Task FirstInstance([TimerTrigger("%MetricPoller:Schedule%")] TimerInfo myTimer)
    {
        var logs = await _accountUsageService.GetNamedInstance("instance-a");
        logs.DoSomething();
    }

    [FunctionName("SecondInstance")]
    public async Task SecondInstance([TimerTrigger("%MetricPoller:Schedule%")] TimerInfo myTimer)
    {
        var logs = await _accountUsageService.GetNamedInstance("instance-b");
        logs.DoSomething();
    }
}
or b) multiple classes, each with just the dependencies it needs injected:
public class JobSchedulerFirstInstance
{
    private readonly ILogs _logs;

    public JobSchedulerFirstInstance(ILogs logs)
    {
        _logs = logs;
    }

    [FunctionName("FirstInstance")]
    public void FirstInstance([TimerTrigger("%MetricPoller:Schedule%")] TimerInfo myTimer)
    {
        _logs.DoSomething();
    }
}
I'd personally lean towards the multiple-classes approach and register named instances with my container. It needs a bit of extra wire-up work, but you'll end up with lots of small, similar-looking classes that are basically just plumbing for the functions runtime to execute.
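As a very rough sketch of that wiring (the configuration key, the "instance-a" name and the helper types are carried over from your snippets or invented for illustration, so treat this as an assumption rather than a definitive implementation), each environment's service could be registered as an Autofac named instance and resolved by name:

// Registration: one named ICosmosDbService per environment (illustrative keys)
foreach (var configuration in configurations)
{
    var dbConfig = CosmosDBHelper.GetProjectDatabaseConfig(configuration.Value, Project.Jupiter);
    containerBuilder
        .Register(ctx => CosmosClientInitializer<AccountUsage>.Initialize(dbConfig))
        .Named<ICosmosDbService<AccountUsage>>(configuration.Key);
}

// Resolution: each small scheduler class asks only for the instance it was built for
var instanceA = container.ResolveNamed<ICosmosDbService<AccountUsage>>("instance-a");

That way every per-environment class stays tiny and the environment-specific plumbing lives entirely in the registration code.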

Run a long running job using the fire and forget strategy with Thymeleaf in Reactor and r2dbc

I am trying to achieve a fire and forget type of effect with webflux, thymeleaf and r2dbc. I have two endpoints, one to add an employee and another to list all employees. I want to simulate a slow database access so I have a thread sleep of several seconds before I call the DB.
Now, the effect I expect to see when I call /add is that my controller returns immediately and the page add is rendered at once. However, I'm not sure how to achieve this. With the current code nap() happens before I can return a Mono. In other words, I'm trying to run a long running job in the background without blocking the controller.
I have the following model:
@Data
public class Employee {
    @Id
    private Long id;
    private String name;
}
The annotated controller has the following methods:
#GetMapping(value = "/")
public String home(Model model) {
model.addAttribute("employees", repo.findAll());
return "home";
}
#GetMapping(value = "/add")
public Mono<String> add() {
return Mono
.defer(this::getEmployee)
.doOnNext(e -> repo.save(e).subscribe())
.thenReturn("add");
}
private Mono<Employee> getEmployee() {
final var e = new Employee();
e.setName("John");
nap(); // calls thread sleep for a few sec
return Mono.just(e);
}
My question is: how can I wrap the long-running job while preserving annotation-based controllers (instead of the functional style) and still render the add page immediately? I am aware of some similar questions like this and this, but I don't seem to be able to achieve the behaviour I need.
Edit:
lkatiforis' suggestion and this SO question were a push in the right direction. I had to adjust their example a bit because the employee didn't persist. The change is in add():
public String add() {
    Mono.just(employee)
        .delayElement(Duration.ofSeconds(5))
        .doOnNext(e -> repo.save(e).subscribe())
        .subscribe();
    return "add";
}
employee is just an instance of Employee with a populated name. The delayElement operator pauses for 5 seconds without blocking. Finally, I had to call subscribe() both on repo.save() and at the end of the chain for it to work. I assume that if subscribe() is only called inside doOnNext(), the outer chain that starts with Mono.just() is never subscribed to and therefore never executes.
I guess the nap() method calls Thread.sleep or something similar, right? Thread.sleep blocks the main thread, making the application unresponsive. You can use the delayElement operator to simulate a long-running operation instead:
private Mono<Employee> getEmployee() {
    final var e = new Employee();
    e.setName("John");
    return Mono.just(e).delayElement(Duration.ofSeconds(5));
}
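If nap() stands in for genuinely blocking work (a slow remote call, say) rather than a simulated delay, one possible variation - sketched here with a hypothetical blocking helper createEmployeeSlowly(), so adapt the names to your code - is to shift that work onto Reactor's boundedElastic scheduler so the controller thread returns immediately:

@GetMapping(value = "/add")
public String add() {
    Mono.fromCallable(this::createEmployeeSlowly)  // hypothetical blocking helper
        .subscribeOn(Schedulers.boundedElastic())  // run the blocking work off the event loop
        .flatMap(e -> repo.save(e))                // persist reactively
        .subscribe();                              // fire and forget
    return "add";
}

subscribe() with no arguments fires the chain and discards the result, which is exactly the fire-and-forget behaviour you describe.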

Amazon SWF queries

Over the last couple of years I have done a fair amount of work with Amazon SWF, but the following points are still unclear to me and I have not been able to find any straightforward answers on any forums yet.
These are pretty basic requirements, I suppose; surely others must have come across them too. It would be great if someone could clarify these:
Is there a simple way to return a workflow execution result (maybe something as simple as a boolean) back to the workflow starter?
Is there a way to catch an Activity timeout exception, so that we can run customised actions in such scenarios?
Why doesn't WorkflowExecutionHistory contain Activities, only Events?
Why is there no simple way of restarting a workflow from the point where it failed?
I am considering using SWF for more business processes at my workplace, but these limitations/doubts are holding me back!
FINAL WORKING SOLUTION
public class ReturnResultActivityImpl implements ReturnResultActivity {

    private SettableFuture<WorkflowResult> future;

    public ReturnResultActivityImpl() {
    }

    public ReturnResultActivityImpl(SettableFuture<WorkflowResult> future) {
        this.future = future;
    }

    public void returnResult(WorkflowResult workflowResult) {
        System.out.print("Marking future as completed");
        future.set(workflowResult);
    }
}
public class WorkflowResult {
    private boolean success;
    private String note;

    public WorkflowResult(boolean s, String n) {
        this.success = s;
        this.note = n;
    }

    public String getNote() {
        return note;
    }
}
public class WorkflowStarter {

    @Autowired
    ReturnResultActivityClient returnResultActivityClient;
    @Autowired
    DummyWorkflowClientExternalFactory dummyWorkflowClientExternalFactory;
    @Autowired
    AmazonSimpleWorkflowClient swfClient;

    String domain = "test-domain";
    boolean isRegister = true;
    int days = 7;
    int terminationTimeoutSeconds = 5000;
    int threadPollCount = 2;
    int taskExecutorThreadCount = 4;

    public String testWorkflow() throws Exception {
        SettableFuture<WorkflowResult> workflowResultFuture = SettableFuture.create();
        String taskListName = "testTaskList-" + RandomStringUtils.randomAlphabetic(8);

        ReturnResultActivity activity = new ReturnResultActivityImpl(workflowResultFuture);
        SpringActivityWorker activityWorker = buildReturnResultActivityWorker(taskListName, Arrays.asList(activity));

        DummyWorkflowClientExternalFactory factory = new DummyWorkflowClientExternalFactoryImpl(swfClient, domain);
        factory.getClient().doSomething(taskListName);

        WorkflowResult result = workflowResultFuture.get(20, TimeUnit.SECONDS);
        return "Call result note - " + result.getNote();
    }

    public SpringActivityWorker buildReturnResultActivityWorker(String taskListName, List activityImplementations)
            throws Exception {
        return setupActivityWorker(swfClient, domain, taskListName, isRegister, days, activityImplementations,
                terminationTimeoutSeconds, threadPollCount, taskExecutorThreadCount);
    }
}
public class Workflow {

    @Autowired
    private DummyActivityClient dummyActivityClient;
    @Autowired
    private ReturnResultActivityClient returnResultActivityClient;

    @Override
    public void doSomething(final String resultActivityTaskListName) {
        Promise<Void> activityPromise = dummyActivityClient.dummyActivity();
        returnResult(resultActivityTaskListName, activityPromise);
    }

    @Asynchronous
    private void returnResult(final String taskListname, Promise waitFor) {
        ActivitySchedulingOptions schedulingOptions = new ActivitySchedulingOptions();
        schedulingOptions.setTaskList(taskListname);
        WorkflowResult result = new WorkflowResult(true, "All successful");
        returnResultActivityClient.returnResult(result, schedulingOptions);
    }
}
The standard pattern is to host a special activity in the workflow starter process that is used to deliver the result. Use a process-specific task list to make sure that it is routed to the correct instance of the starter. Here are the steps to implement it:
Define an activity to receive the result, for example "returnResultActivity". Make this activity implementation complete the Future passed to its constructor upon execution.
When the workflow is started it receives "resultActivityTaskList" as an input argument. At the end, the workflow calls this activity with the workflow result. The activity is scheduled on the passed task list.
The workflow starter creates an ActivityWorker and an instance of a Future. Then it creates an instance of "returnResultActivity" with that future as a constructor parameter.
Then it registers the activity instance with the activity worker and configures it to poll on a randomly generated task list name. Then it starts the workflow execution, passing the generated task list name as an input argument.
Then it waits for the Future to complete; future.get() is going to return the workflow result.
Yes, if you are using the AWS Flow Framework, a timeout exception is thrown when an activity times out. If you are not using the Flow Framework, you are making your life 100 times harder. BTW, a workflow timeout is thrown into a parent workflow as a timeout exception as well. It is not possible to catch a workflow timeout exception from within the timing-out instance itself. In this case it is recommended not to rely on the workflow timeout, but instead to create a timer that fires and notifies the workflow logic that some business event has timed out.
Because a single activity execution has multiple events associated with it. It should be pretty easy to write code that converts the history into whatever representation of activities you like; such code would just match up the events that relate to each activity. Each event always has a reference to its related events, so it is easy to roll them up into a higher-level representation.
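As a rough illustration of that rollup (a sketch against the AWS SDK for Java v1 history model; the grouping strategy itself is just an example, not the only way to do it), you can key every activity-related event off the id of its ActivityTaskScheduled event:

// Group history events into per-activity buckets keyed by the scheduled event id
Map<Long, List<HistoryEvent>> byActivity = new HashMap<>();
for (HistoryEvent event : historyEvents) {
    if ("ActivityTaskScheduled".equals(event.getEventType())) {
        byActivity.put(event.getEventId(), new ArrayList<>());
        byActivity.get(event.getEventId()).add(event);
    } else if (event.getActivityTaskCompletedEventAttributes() != null) {
        Long scheduledId = event.getActivityTaskCompletedEventAttributes().getScheduledEventId();
        byActivity.get(scheduledId).add(event);
    }
    // ... match started, failed and timed-out events the same way via their getScheduledEventId()
}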
Unfortunately there is no easy answer to this one. Ideally SWF would support restarting a workflow by copying its history up to the failure point, but that is not supported. I personally believe a workflow should be written in a way that it never fails but always deals with failures without failing itself. Obviously that doesn't work for failures caused by unexpected conditions; in that case, writing the workflow so that it can be restarted from the beginning is the simplest approach.

breeze: creating inheritance in client-side model

I'm having a weird issue with the configureMetadataStore.
My model:
class SourceMaterial {
    List<Job> Jobs {get; set;}
}

class Job {
    public SourceMaterial SourceMaterial {get; set;}
}

class JobEditing : Job {}

class JobTranslation : Job {}
Module for configuring Job entities:
angular.module('cdt.request.model')
    .factory('jobModel', ['breeze', 'dataService', 'entityService', modelFunc]);

function modelFunc(breeze, dataService, entityService) {

    function Ctor() {
    }

    Ctor.extend = function (modelCtor) {
        modelCtor.prototype = new Ctor();
        modelCtor.prototype.constructor = modelCtor;
    };

    Ctor.prototype._configureMetadataStore = _configureMetadataStore;

    return Ctor;

    // constructor
    function jobCtor() {
        this.isScreenDeleted = null;
    }

    function _configureMetadataStore(entityName, metadataStore) {
        metadataStore.registerEntityTypeCtor(entityName, jobCtor, jobInitializer);
    }

    function jobInitializer(job) { /* do stuff here */ }
}
Module for configuring JobEditing entities:
angular.module('cdt.request.model')
    .factory('jobEditingModel', ['jobModel', modelFunc]);

function modelFunc(jobModel) {

    function Ctor() {
        this.configureMetadataStore = configureMetadataStore;
    }

    jobModel.extend(Ctor);

    return Ctor;

    function configureMetadataStore(metadataStore) {
        return this._configureMetadataStore('JobEditing', metadataStore);
    }
}
Module for configuring JobTranslation entities:
angular.module('cdt.request.model')
    .factory('jobTranslationModel', ['jobModel', modelFunc]);

function modelFunc(jobModel) {

    function Ctor() {
        this.configureMetadataStore = configureMetadataStore;
    }

    jobModel.extend(Ctor);

    return Ctor;

    function configureMetadataStore(metadataStore) {
        return this._configureMetadataStore('JobTranslation', metadataStore);
    }
}
The models are then configured like this:
JobEditingModel.configureMetadataStore(dataService.manager.metadataStore);
JobTranslationModel.configureMetadataStore(dataService.manager.metadataStore);
Now when I call createEntity for a JobEditing, the instance is created and at some point breeze calls setNpValue and adds the newly created Job to the navigation property SourceMaterial.
That's all fine, except that it is added twice!
It happens when rawAccessorFn(newValue); is called - in fact it is called twice.
And if I add a new type of job (and hence register a new type with the metadataStore), then the new Job is added three times to the navigation property.
I can't see what I'm doing wrong. Can anyone help ?
EDIT
I've noticed that if I change:
metadataStore.registerEntityTypeCtor(entityName, jobCtor, jobInitializer);
to
metadataStore.registerEntityTypeCtor(entityName, null, jobInitializer);
Then everything works fine again! So the problem is registering the same jobCtor function for several entity types. Should that not be possible?
Our Bad
Let's start with a Breeze bug, recently discovered, in the Breeze "backingStore" model library adapter.
There's a part of that adapter which is responsible for rewriting the data properties of the entity constructor so that they become observable and self-validating; it kicks in when you register a type with registerEntityTypeCtor.
It tries to keep track of which properties it has rewritten. The bug is that it records the fact of the rewrite on the EntityType rather than on the constructor function. Consequently, every time you registered a new type, it failed to realize that it had already rewritten the properties of the base Job type, and re-wrapped those properties again.
This was happening to you. Every derived type that you registered re-wrapped/re-wrote the properties of the base type (and of its base type, etc).
In your example, a base class Job property would be re-written 3 times and its inner logic executed 3 times if you registered three of its sub-types. And the problem disappeared when you stopped registering constructors of sub-types.
We're working on a revised Breeze "backingStore" model library adapter that won't have this problem and, coincidentally, will behave better in test scenarios (that's how we found the bug in the first place).
Your Bad?
Wow that's some hairy code you've got there. Why so complicated? In particular, why are you adding a one-time MetadataStore configuration to the prototypes of entity constructor functions?
I must be missing something. The code to register types is usually much smaller and simpler. I get that you want to put each type in its own file and have it self-register. The cost of that (as you've written it) is enormous bulk and complexity. Please reconsider your approach. Take a look at other Breeze samples, Zza-Node-Mongo for example.
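For comparison, a minimal sketch of that simpler registration - building on your own workaround of passing null for the constructor, with the initializer name taken from your snippets - would register each type directly against the MetadataStore once at startup:

var store = dataService.manager.metadataStore;

function jobInitializer(job) { /* do stuff here */ }

// until the adapter fix ships, passing null for the ctor avoids the re-wrapping issue
store.registerEntityTypeCtor('JobEditing', null, jobInitializer);
store.registerEntityTypeCtor('JobTranslation', null, jobInitializer);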
Thanks for reporting the issue. Hang in there with us. A fix should be arriving soon ... I hope in the next release.

File IO within an ASP.NET MVC Action

Is it possible to use some kind of 'critical section' so that it is safe to do something like the following within an action...
public ActionResult GenerateTasks()
{
    string someDir = ....
    if (!Directory.Exists(someDir))
    {
        Directory.CreateDirectory(someDir);
    }
    ...
}
You can do this only by using a system-wide mutex. Process or app-domain locking primitives will fail to work under certain conditions (for instance, when an application pool is recycled).
However, for the specific case here that isn't necessary: Directory.CreateDirectory already performs an existence check on its own, so you shouldn't need to do anything in this regard.
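If you do end up needing a cross-process critical section, a minimal sketch with a named system-wide mutex (the mutex name here is illustrative) could look like this:

// cross-process critical section around the directory work
using (var mutex = new Mutex(initiallyOwned: false, name: @"Global\MyApp.GenerateTasks"))
{
    mutex.WaitOne();
    try
    {
        if (!Directory.Exists(someDir))
        {
            Directory.CreateDirectory(someDir);
        }
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}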
I'm assuming from your question that the concurrent safety you're interested in is whether the directory could be created by a different thread between the Directory.Exists and Directory.CreateDirectory calls. (If you're concerned about Directory.CreateDirectory throwing an exception when the directory already exists, it won't.) If so, and this is the only point in your code that could do that, then you can simply use a lock object to make this set of operations safe across multiple threads:
private static object lockObject = new object();

public ActionResult GenerateTasks()
{
    string someDir = ....
    lock (lockObject)
    {
        if (!Directory.Exists(someDir))
        {
            Directory.CreateDirectory(someDir);
        }
    }
    ...
}
This does not, however, make any guarantees that the directory isn't being interacted with outside of your control, say, by another application or process.
