How to change the level of AX info messages - x++

In Dynamics AX 2009 I am trying to control the indentation level of info messages. What I want is something similar to this:
Prefix
    Info1
    Info2
Prefix2
    Info3
I found this:
http://www.doens.be/2010/05/the-ax-infolog/
But don't want to use a loop, so I thought something like this might work:
setprefix("Prefix");
{
info("Info1");
info("Info2");
}
setprefix("Prefix2");
{
info("Info3");
}
But it doesn't work. Is there a way to do this in X++, and what are the rules for which indent level is currently active?

setPrefix in AX sets (adds) the prefix for the current execution scope, and when leaving the scope the prefix is automatically reset to the previous level. You can use getPrefix to check the current execution prefix.
Two hacks can help you achieve the expected result:
#1
static void TestJob(Args _args)
{
    void sub1()
    {
        setprefix("Prefix");
        info("Info1");
        info("Info2");
    }

    void sub2()
    {
        setprefix("Prefix2");
        info("Info3");
    }
    ;
    setPrefix("Main");
    sub1();
    sub2();
}
#2
static void TestJob(Args _args)
{
    setPrefix("Main");
    info("Prefix\tInfo1");
    info("Prefix\tInfo2");
    info("Prefix2\tInfo3");
}

Related

Nested descendant pattern matches

I'm trying to find all method calls and the classes that contain them. If I understand correctly, pattern matches perform backtracking to match in all possible ways.
Take the following Java code.
package main;

public class Main {
    public static void main(String[] args) {
        System.out.println("hello world");
        System.out.println("hello again");
    }
}
I'm loading the code with createAstsFromDirectory.
rascal>ast = createAstsFromDirectory(|home:///multiple-method-calls|, false);
I'm trying to find both calls to println. The following code matches once:
void findCalls(set[Declaration] ast)
{
    visit(ast)
    {
        case \class(_,_,_,/\methodCall(_,_,str methodName,_)):
            println("<methodName>");
    }
}
rascal>findCalls(ast);
println
ok
This code matches four times:
void findCalls(set[Declaration] ast)
{
    visit(ast)
    {
        case /\class(_,_,_,/\methodCall(_,_,str methodName,_)):
            println("<methodName>");
    }
}
rascal>findCalls(ast);
println
println
println
println
ok
What must the pattern look like to match exactly twice?
A related question: how do I access the class name? When I try to access the class name I get an error message.
void findCalls(set[Declaration] ast)
{
    visit(ast)
    {
        case /\class(str className,_,_,/\methodCall(_,_,str methodName,_)):
            println("<className> <methodName>");
    }
}
findCalls(ast);
Main println
|project://personal-prof/src/Assignment13Rules.rsc|(3177,9,<141,16>,<141,25>): Undeclared variable: className
It looks like the first match has className bound correctly to "Main", but the second one does not.
I think I would write this:
void findCalls(set[Declaration] ast) {
    for (/class(_, _, _, /methodCall(_,_,str methodName,_)) <- ast) {
        println("<methodName>");
    }
}
The for loop iterates over every class it can find, and then over every methodCall it can find inside that class, so it matches twice for the example you gave.
Your second try goes wrong and matches too often: if you nest a / at the top of the case of a visit, you visit every position in the tree once, and then traverse the entire sub-tree again, including the root node. So you would get each call twice.
Your first try goes wrong because the top-level case pattern of a visit does not backtrack on itself: it finds the first match for the entire pattern and then stops. So the nested / is matched only once before the body is executed.
In the for loop solution, the for-loop (unlike the visit) will try all possible matches until it stops, so that's the way to go. There is yet another solution closer to your original plan:
void findCalls(set[Declaration] ast) {
    visit (ast) {
        case class(_, _, _, body):
            for (/methodCall(_,_,str methodName,_) <- body) {
                println("<methodName>");
            }
    }
}
This also works just like the for loop: it first finds all the classes one by one via the visit, and then goes through all the nested matches via the for loop.
Finally you could also nest the visit itself to get the right answer:
void findCalls(set[Declaration] ast) {
    visit (ast) {
        case class(_, _, _, body):
            visit (body) {
                case methodCall(_,_,str methodName,_): {
                    println("<methodName>");
                }
            }
    }
}
Regarding the className issue: it appears that the combination of visit and a top-level deep match in a case (i.e. case /<pattern>) hits a bug in the Rascal interpreter that loses variable bindings in the deep pattern. So please avoid that pattern (I don't think you need it), and if you feel like it, submit an issue report on GitHub.
In the for-loop case, this will simply work as expected:
for (/class(str className, _, _, /methodCall(_,_,str methodName,_)) <- ast) {
    println("<className>::<methodName>");
}

How to chain an indefinite number of flatMap operators in Reactor?

I have some initial state in my application and a few policies that decorate this state with reactively fetched data (each policy's Mono returns a new instance of the state with additional data). Eventually I get a fully decorated state.
It basically looks like this:
public interface Policy {
    Mono<State> apply(State currentState);
}
Usage for a fixed number of policies would look like this:
Flux.just(baseState)
    .flatMap(firstPolicy::apply)
    .flatMap(secondPolicy::apply)
    ...
    .subscribe();
It basically means that the entry state for each Mono is the result of accumulating the initial state and the results of that Mono's predecessors.
In my case the number of policies is not fixed; they come from another layer of the application as a collection of objects that implement the Policy interface.
Is there any way to achieve a result similar to the code above (with two flatMap calls), but for an unknown number of policies? I have tried Flux's reduce method, but it works only if the policy returns a value, not a Mono.
This seems difficult because you're streaming your baseState, then trying to do an arbitrary number of flatMap() calls on that. There's nothing inherently wrong with using a loop to achieve this, but I like to avoid that unless absolutely necessary, as it breaks the natural reactive flow of the code.
If you instead iterate and reduce the policies into a single policy, then the flatMap() call becomes trivial:
Flux.fromIterable(policies)
    .reduce((p1, p2) -> s -> p1.apply(s).flatMap(p2::apply))
    .flatMap(p -> p.apply(baseState))
    .subscribe();
If you're able to edit your Policy interface, I'd strongly suggest adding a static combine() method to reference in your reduce() call to make that more readable:
interface Policy {
    Mono<State> apply(State currentState);

    public static Policy combine(Policy p1, Policy p2) {
        return s -> p1.apply(s).flatMap(p2::apply);
    }
}
The Flux then becomes much more descriptive and less verbose:
Flux.fromIterable(policies)
    .reduce(Policy::combine)
    .flatMap(p -> p.apply(baseState))
    .subscribe();
As a complete demonstration, swapping out your State for a String to keep it shorter:
interface Policy {
    Mono<String> apply(String currentState);

    public static Policy combine(Policy p1, Policy p2) {
        return s -> p1.apply(s).flatMap(p2::apply);
    }
}

public static void main(String[] args) {
    List<Policy> policies = new ArrayList<>();
    policies.add(x -> Mono.just("blah " + x));
    policies.add(x -> Mono.just("foo " + x));

    String baseState = "bar";

    Flux.fromIterable(policies)
        .reduce(Policy::combine)
        .flatMap(p -> p.apply(baseState))
        .subscribe(System.out::println); // Prints "foo blah bar"
}
If I understand the problem correctly, the simplest solution is to use a regular for loop:
Flux<State> flux = Flux.just(baseState);

for (Policy policy : policies)
{
    flux = flux.flatMap(policy::apply);
}

flux.subscribe();
Also, note that if you have just a single baseState you can use Mono instead of Flux.
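A minimal sketch of that Mono-based variant, reusing the Policy interface and the policies collection from above (this sketch is not part of the original answer):

// Same loop, but driven by a Mono since there is only one baseState.
Mono<State> mono = Mono.just(baseState);

for (Policy policy : policies)
{
    mono = mono.flatMap(policy::apply);
}

mono.subscribe();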
UPDATE:
If you are concerned about breaking the flow, you can extract the for loop into a method and apply it via the transform operator:
Flux.just(baseState)
    .transform(this::applyPolicies)
    .subscribe();

private Publisher<State> applyPolicies(Flux<State> originalFlux)
{
    Flux<State> newFlux = originalFlux;

    for (Policy policy : policies)
    {
        newFlux = newFlux.flatMap(policy::apply);
    }

    return newFlux;
}

Is this loginRequired(f)() the way to handle login-required functions in Dart?

I am new to Dart programming. I am trying to figure out the proper way (what everyone would do) to handle/guard functions that require login. The following is my first attempt:
$ vim login_sample.dart:
var isLoggedIn;

class LoginRequiredException implements Exception {
  String cause;
  LoginRequiredException(this.cause);
}

Function loginRequired(Function f) {
  if (!isLoggedIn) {
    throw new LoginRequiredException("Login is required.");
  }
  return f;
}

void secretPrint() {
  print("This is a secret");
}

void main(List<String> args) {
  if (args.length != 1) return null;
  isLoggedIn = (args[0] == '1') ? true : false;
  try {
    loginRequired(secretPrint)();
  } on LoginRequiredException {
    print("Login is required!");
  }
}
Then run it with $ dart login_sample.dart 1 and $ dart login_sample.dart 2.
I am wondering if this is the recommended way to guard login required functions or not.
Thank you very much for your help.
Edited:
My question is more about general programming technique in Dart than about how to use a plugin. In Python, I just need to add a @login_required decorator in front of a function to protect it. I am wondering whether this decorator-function style is recommended in Dart or not.
PS: All firebase/google/twitter/facebook etc... are blocked in my country.
I like the functional approach. I'd only avoid using globals: you can wrap it in a Context so you can mock them for tests, and use Futures as monads: https://dartpad.dartlang.org/ac24a5659b893e8614f3c29a8006a6cc
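For illustration, a minimal sketch of the Context part of that idea (the AuthContext name and shape here are illustrative, not taken from the linked DartPad):

class LoginRequiredException implements Exception {
  final String cause;
  LoginRequiredException(this.cause);
}

class AuthContext {
  final bool isLoggedIn;
  AuthContext(this.isLoggedIn);

  // Same idea as loginRequired(f), but the state lives in the context,
  // so tests can construct an AuthContext instead of setting a global.
  T Function() loginRequired<T>(T Function() f) {
    if (!isLoggedIn) {
      throw LoginRequiredException('Login is required.');
    }
    return f;
  }
}

void main() {
  final ctx = AuthContext(true);
  ctx.loginRequired(() => print('This is a secret'))();
}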
Passing the function is not buying much value. In a typical larger Dart project using a framework there will be some way to guard at a higher level than a function - such as an entire page or component/widget.
If you do want to guard at a per-function level, you first need to decide whether it should be the function or the call site that decides what needs to be guarded. In your example it is the call site making the decision. After that decision you can implement a throwIfNotAuthenticated and add a call at either the definition or the call site.
void throwIfNotAuthenticated() {
  if (!userIsAuthenticated) {
    throw new LoginRequiredException();
  }
}

// Function decides authentication is required:
void secretPrint() {
  throwIfNotAuthenticated();
  print('This is a secret');
}

// Call site decides authentication is required:
void main() {
  // do stuff...
  throwIfNotAuthenticated();
  anotherSecretMethod();
}

Emit detailed signal in Vala

I have the following original C code:
static guint event_signal_id;

struct _MatrixClientIface {
    void (*event)(MatrixClient *client, const gchar *room_id, const JsonNode *raw_event, MatrixEvent *event);
};

static void
matrix_client_default_init(MatrixClientIface *iface)
{
    event_signal_id = g_signal_new("event",
                                   MATRIX_TYPE_CLIENT,
                                   G_SIGNAL_RUN_LAST | G_SIGNAL_DETAILED,
                                   G_STRUCT_OFFSET(MatrixClientIface, event),
                                   NULL, NULL, _matrix_marshal_VOID__STRING_BOXED_OBJECT,
                                   G_TYPE_NONE, 3,
                                   G_TYPE_STRING,
                                   JSON_TYPE_NODE,
                                   MATRIX_TYPE_EVENT);
}

void
matrix_client_incoming_event(MatrixClient *client,
                             const gchar *room_id,
                             const JsonNode *raw_event,
                             MatrixEvent *event)
{
    GQuark equark;

    equark = g_type_qname(G_TYPE_FROM_INSTANCE(event));

    g_signal_emit(client,
                  event_signal_id, equark,
                  room_id, raw_event, event);
}
Now I want to transform this to Vala; however, I cannot find a tutorial about emitting signals (defining them appears in tutorials many times). I found GLib.Signal.emit() in the docs, but there is no example there on how to get a GLib.Signal object.
My current interface looks like this:
namespace Matrix {
    public interface Client : GLib.Object {
        public virtual signal void
        @event(string? room_id, Json.Node raw_event, Matrix.Event matrix_event)
        {
            Quark equark = @event.get_type().qname();
            @event.emit(room_id, raw_event, matrix_event);
        }
    }
}
This obviously doesn’t work. The questions are:
Am I defining the emitter as I should, at all?
If so, how do I actually emit the event signal with equark as a detail?
I cannot find a tutorial about emitting signals (defining them appears in tutorials many times).
I suspect that you actually have, but missed the significance because emitting signals is so easy. For example, see the Signals section of the Vala Tutorial; emitting is shown (t1.sig_1(5);).
I found GLib.Signal.emit() in the docs, but there is no example there on how to get a GLib.Signal object.
GLib.Signal.emit is the low-level way of emitting signals, and there is basically no reason to ever use it from Vala.
Note that the first argument to emit is a pointer. There are a few exceptions (most notably libxml), but for the most part, if you ever encounter a pointer in Vala, you're doing something wrong. That also largely holds true for quarks; virtually everything you would use quarks for has syntax support in Vala.
Am I defining the emitter as I should, at all?
Nope. For starters, get rid of the method body and the "virtual".
If so, how do I actually emit the event signal with equark as a detail?
All you need is something like this:
namespace Matrix {
    public class Client : GLib.Object {
        [Signal (detailed = true)]
        public signal void @event (string? room_id, Json.Node raw_event, Matrix.Event evt);
    }
}
To emit it, you can do something like:
client.@event[evt.get_type().name()](room_id, raw_event, evt);
To connect:
client.@event[evt.get_type().name()].connect((room_id, raw_event, evt) => {
    // Signal with a matching detail received
});

client.@event.connect((room_id, raw_event, evt) => {
    // Signal received with *any* value for the detail
});

Global [BeforeScenario], [AfterScenario] steps in SpecFlow

We're trying to implement global hooks on our SpecFlow tests and are not entirely sure how [BeforeScenario] and [AfterScenario] attributed methods work.
The way I've seen it done, those attributes are always defined in a class containing specific steps used in a few scenarios.
Can they go somewhere so they apply to all scenarios? Or does attributing the methods with [BeforeScenario] and [AfterScenario] cause them to be run for all scenarios, regardless of where they're actually placed?
Hmm... From what I know, and according to the documentation, these hooks are always global. From http://www.specflow.org/documentation/hooks/:
Hooks
The hooks (event bindings) can be used to perform additional automation logic on specific events, like before the execution of a scenario.
The hooks are global but can be restricted to run only for features or scenarios with a specific tag (see below). The execution order of hooks for the same event is undefined.
In fact, I produced a small demo project with the following:
[Binding]
public class Unrelated
{
    [BeforeScenario]
    public void WillBeCalledIfGlobal()
    {
        Console.WriteLine("I'm global");
    }
}

[Binding]
public class JustTheTest
{
    [Given("nothing")]
    public void GivenNothing()
    {
        // Don't do anything
    }
}
Then, with the test specification
As a developer
In order to understand how BeforeSpecification works
I want to know what the following does

Scenario: See if BeforeSpecification hook gets called
    Given nothing
I get the output:
I'm global
Given nothing
-> done: JustTheTest.GivenNothing() (0.0s)
So it really does look as if the documentation is correct, and you should use tagging to control whether the BeforeScenario / AfterScenario hooks run for your scenario.
There is also a good example of how tagging works here -> Feature-scoped step definitions with SpecFlow?
Yes, you can create global BeforeScenario and AfterScenario methods, but in practice I find that this is not desirable, as usually the same before and after steps do not apply to all steps in a test project.
Instead, I create a base class for my step definitions, which has the BeforeScenario and AfterScenario methods I'd like applied to all of my scenarios, e.g.:
public class BaseStepDefinitions
{
    [BeforeScenario]
    public void BeforeScenario()
    {
        // BeforeScenario code
    }

    [AfterScenario]
    public void AfterScenario()
    {
        // AfterScenario code
    }
}
Note that I have not used the Binding attribute on this class. If you do include it then the BeforeScenario and AfterScenario steps would be global.
I then derive my step definition classes from this base step definition class, so that they will have the Before and After scenario methods, e.g.:
[Binding]
public class SpecFlowFeature1Steps : BaseStepDefinitions
{
    [Given(@"I have entered (.*) into the calculator")]
    public void GivenIHaveEnteredIntoTheCalculator(int inputValue)
    {
        ScenarioContext.Current.Pending();
    }

    [When(@"I press add")]
    public void WhenIPressAdd()
    {
        ScenarioContext.Current.Pending();
    }

    [Then(@"the result should be (.*) on the screen")]
    public void ThenTheResultShouldBeOnTheScreen(int expectedResult)
    {
        ScenarioContext.Current.Pending();
    }
}
Whilst this approach is not global, by making all StepDefinitions derive from a BaseStepDefinition class we achieve the same outcome.
It also gives more control, i.e. if you don't want the BeforeScenario or AfterScenario binding, then don't derive from the base steps.
Sorry, this doesn't work. As soon as you start using multiple [Binding] classes you end up with multiple calls. For example, if I extend the example above to split the bindings into three classes:
[Binding]
public class SpecFlowFeature1Steps : BaseStepDefinitions
{
    [Given(@"I have entered (.*) into the calculator")]
    public void GivenIHaveEnteredIntoTheCalculator(int inputValue)
    {
        //ScenarioContext.Current.Pending();
    }
}

[Binding]
public class SpecFlowFeature2Steps : BaseStepDefinitions
{
    [When(@"I press add")]
    public void WhenIPressAdd()
    {
        //ScenarioContext.Current.Pending();
    }
}

[Binding]
public class SpecFlowFeature3Steps : BaseStepDefinitions
{
    [Then(@"the result should be (.*) on the screen")]
    public void ThenTheResultShouldBeOnTheScreen(int expectedResult)
    {
        //ScenarioContext.Current.Pending();
    }
}

public class BaseStepDefinitions
{
    [BeforeScenario]
    public void BeforeScenario()
    {
        // BeforeScenario code
        Console.WriteLine("Before. [Called from " + this.GetType().Name + "]");
    }

    [AfterScenario]
    public void AfterScenario()
    {
        // AfterScenario code
        Console.WriteLine("After. [Called from " + this.GetType().Name + "]");
    }
}
Then when I run it, the output is
Before. [Called from SpecFlowFeature1Steps]
Before. [Called from SpecFlowFeature2Steps]
Before. [Called from SpecFlowFeature3Steps]
Given I have entered 50 into the calculator
-> done: SpecFlowFeature1Steps.GivenIHaveEnteredIntoTheCalculator(50) (0.0s)
And I have entered 70 into the calculator
-> done: SpecFlowFeature1Steps.GivenIHaveEnteredIntoTheCalculator(70) (0.0s)
When I press add
-> done: SpecFlowFeature2Steps.WhenIPressAdd() (0.0s)
Then the result should be 120 on the screen
-> done: SpecFlowFeature3Steps.ThenTheResultShouldBeOnTheScreen(120) (0.0s)
After. [Called from SpecFlowFeature1Steps]
After. [Called from SpecFlowFeature2Steps]
After. [Called from SpecFlowFeature3Steps]
What you can do in order to control the 'BeforeScenario' and 'AfterScenario' hooks is use tags. This gives you control over which scenarios run which before and after blocks. Your scenario would look like this:
@GoogleChrome
Scenario: Clicking on a button
    Given the user is on some page
    When the user clicks a button
    Then something should happen
Here you could let the 'BeforeScenario' start a browser session in Google Chrome for you, and implement similar tags for different browsers. Your 'BeforeScenario' would look like this:
[Binding]
class Browser
{
    [BeforeScenario("GoogleChrome")]
    public static void BeforeChromeScenario()
    {
        // Start Browser session and do stuff
    }

    [AfterScenario("GoogleChrome")]
    public static void AfterChromeScenario()
    {
        // Close the scenario properly
    }
}
I think using tags is a nice way of keeping your scenarios clean, and it gives you extra control over what each scenario should do.
