How to configure pub-sub for multiple subscribers with Rhino Service Bus?

I am trying to set up pub-sub between one publisher and multiple subscribers using Rhino Service Bus. However, all I ever seem to get is competing consumers (messages are distributed to one consumer or the other, but never delivered to both).
My current publisher configuration looks like this (note: I'm using the new OnewayRhinoServiceBusFacility, so I don't need to define a bus element on the sender):
<facility id="rhino.esb.sender" >
<messages>
<add name="My.Messages.Namespace" endpoint="msmq://localhost/my.queue"/>
</messages>
</facility>
My current subscriber configuration looks like this:
<facility id="rhino.esb.receiver" >
<bus threadCount="1" numberOfRetries="5" endpoint="msmq://localhost/my.queue" DisableAutoQueueCreation="false" />
<messages>
<add name="My.Messages.Namespace" endpoint="msmq://localhost/my.queue" />
</messages>
</facility>
I have two simple command-line apps that start up the publisher and the subscriber. I just copy and paste the subscriber's bin folder to set up two subscribers. My message handler looks like this:
public class DummyReceiver : ConsumerOf<MyMessageType>
{
    public void Consume(MyMessageType message)
    {
        // ......
    }
}
Any ideas? Cheers

Doh! I was using Send instead of Publish in my producer code. I had copied it from another example and forgot to change it.
So, for reference, my publisher code looks like this:
var container = new WindsorContainer(new XmlInterpreter("RhinoEsbSettings.xml"));
RhinoServiceBusFacility facility = new RhinoServiceBusFacility();
container.Kernel.AddFacility("rhino.esb", facility);
var bus = container.Resolve<IStartableServiceBus>();
bus.Start();
MyMessageType msg = new ...
bus.Publish(msg);
And my consumer startup code is like this:
var container = new WindsorContainer(new XmlInterpreter("RhinoEsbSettings.xml"));
container.Register(Component.For<ConsumerOf<MyMessageType>>().ImplementedBy<DummyReceiver>().LifeStyle.Transient.Named("Consumer"));
RhinoServiceBusFacility facility = new RhinoServiceBusFacility();
container.Kernel.AddFacility("rhino.esb", facility);
var bus = container.Resolve<IStartableServiceBus>();
bus.Start();

Related

anylogic agent communication and message sending

In my model, I have some agents:
a "Demand" agent,
an "EnergyProducer1" agent,
an "EnergyProducer2" agent.
When my hourly energy demands are created in the Main agent by a function, the priority for satisfying this demand belongs to the "EnergyProducer1" agent. In this agent, I have a function that calculates energy production based on some conditions. Part of this function is the following:
**" if (statechartA.isStateActive(Operating.busy)) && ( main.heatLoadDemandPerHour >= heatPowerNominal) {
producedHeatPower = heatPowerNominal;
naturalGasConsumptionA = naturalGasConsumptionNominal;
send("boilerWorking",boiler);
} else ..... "**
My question relates to the 4th line of this code. If agent1 fails to satisfy the hourly demand, I have to tell agent2 to satisfy the rest of the demand. If I send this message to agent2, its statechart will become active and agent2's function will run. My question is: will all of this happen within the same hour? If not, is directly accessing the variables and parameters of agent2 a more appropriate way?
I hope I could explain my problem.
Thanks for your help in advance...
As a general comment on your question: within the AnyLogic environment, sending messages is always preferable to directly accessing the variables and parameters of another agent.
Specifically, in the example presented, the send() function will schedule message delivery at the next instant after the completion of the current function.
Update: A message in AnyLogic can be any Java class. Sending strings such as the "boilerWorking" used in the example is good for general control; however, if more information needs to be shared (such as a double value), then it is good practice to create a new Java class (let's call it ModelMessage and follow these instructions) with at least two properties, msgStr and msgVal. With this new class, sending a message changes from this:
...
send("boilerWorking", boiler);
...
to this:
...
send(new ModelMessage("boilerWorking",42.0), boiler);
...
and firing transitions in the statechart has to be changed to use "if expression is true", with the expression being msg.msgStr.equals("boilerWorking").
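For illustration, a minimal sketch of such a ModelMessage class (the constructor is an assumption; in AnyLogic you would create it as a new Java class in the project):
import java.io.Serializable;

public class ModelMessage implements Serializable {
    public String msgStr;  // control string, e.g. "boilerWorking"
    public double msgVal;  // numeric payload, e.g. the unmet demand

    public ModelMessage(String msgStr, double msgVal) {
        this.msgStr = msgStr;
        this.msgVal = msgVal;
    }
}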
More information about Agent communication is available here.

How to implement Spock Parameterized test best practice?

I have a test spec which should be run with a unique data set per run. The best practice for this is a bit unclear to me. How should the code below be modified to run with each of the two data sets listed after it?
@Stepwise
class marktest extends ShopBootStrap {
    private boolean useProductionUrl = false

    def "Can Access Shop DevLogin page"() {
        // DevStartLogin: 'New OE Start' button click
        setup:
        println System.getProperty("webdriver.chrome.driver")

        when:
        to ShopDevStartPage

        then:
        at ShopDevStartPage
    }

    def "on start enrollment, select 'United States' and click 'continue' button"() {
        when: "enter Sponsor ID and click New OE Start"
        to ShopDevStartPage
        sponsorId.value(ShopDevStartPage.SPONSORID)
        NewOEButton.click()

        then:
        waitFor { NewEnrollmentPage }
    }
}
1) data set 1
private boolean useProductionUrl = false
protocol = System.getProperty("protocol") ?: "https"
baseDomain = System.getProperty("base.url") ?: "beta.com"
testPassword = System.getProperty("test.password") ?: "dontyouwish"
2) data set 2
private boolean useProductionUrl = true
protocol = System.getProperty("protocol") ?: "https"
baseDomain = System.getProperty("base.url") ?: "production.com"
testPassword = System.getProperty("test.password") ?: "dywyk"
Generally, to make a test depend on data, use a where block, possibly together with the @Unroll annotation.
However, your case is simply not the best example for a data driven test.
The baseDomain and protocol should rather be set in the GebConfig.groovy, similar to the snippets you provided.
Refer to this section in the Book of Geb, as that is what you are using.
Simple example (in GebConfig.groovy):
environments {
    production {
        baseUrl = "https://production.com"
    }
    beta {
        baseUrl = "https://beta.com"
    }
}
If done this way, your individual tests do not need to care about the environment, as this is already built into Geb.
For example when navigating to pages, their base URL is automatically set.
You did not provide that part of the code in your example (how the pages are defined), so I cannot help you with that directly.
Now, in your case, as far as the "password" is concerned, you could read it from an environment variable or system property that you set close to where you configure Geb via the geb.env or geb.build.baseUrl system properties.
Note that I am just considering this for practical reasons, without any regard for the secrecy of the password.
You would pick up the variable in the page class that uses it.
Example code in page class:
static content = {
    // ...
    passwordInput { $('input[type="password"]') }
    // ...
}

void enterPassword() {
    passwordInput.value(System.getProperty('test.password'))
}
To make this work, you need to start your tests with the system properties set to the correct values.
E.g. if starting directly from the command line, you would add the parameters -Dgeb.env=beta -Dtest.password=dontyouwish.
If running from a Gradle task, you would need to add appropriate systemProperty keys and values to that task.
If running from IDE, refer to your IDEs documentation on how to set Java system properties when running a program.

Apache Beam PubSubIO with GroupByKey

I'm trying, with Apache Beam 2.1.0, to consume simple (key, value) data from Google Pub/Sub and group by key so I can process batches of data.
With the default trigger, my code after GroupByKey never fires (I waited 30 minutes).
If I define a custom trigger, the code is executed, but I would like to understand why the default trigger never fires. I tried defining my own timestamp with withTimestampLabel, but got the same issue. I tried changing the duration of the windows (1 second, 10 seconds, 30 seconds, etc.), but got the same issue too.
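For reference, the kind of custom trigger that did make the code fire can be attached to the window like this (a sketch only; the processing-time trigger and the 10-second delay are illustrative assumptions, not the exact values I used):
.apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
        // Fire a pane 10 seconds after the first element arrives in it,
        // instead of waiting for the watermark-driven default trigger
        .triggering(AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardSeconds(10)))
        .withAllowedLateness(Duration.ZERO)
        .discardingFiredPanes())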
I used the command line to insert data for this test:
gcloud beta pubsub topics publish test A,1
gcloud beta pubsub topics publish test A,2
gcloud beta pubsub topics publish test B,1
gcloud beta pubsub topics publish test B,2
The documentation says that we can use one or the other, but not necessarily both:
If you are using unbounded PCollections, you must use either
non-global windowing OR an aggregation trigger in order to perform a
GroupByKey or CoGroupByKey
It looks similar to:
Consuming unbounded data in windows with default trigger
Scio: groupByKey doesn't work when using Pub/Sub as collection source
My code
static class Compute extends DoFn<KV<String, Iterable<Integer>>, Void> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        // Code never fires
        System.out.println("KEY:" + c.element().getKey());
        System.out.println("NB:" + c.element().getValue().spliterator().getExactSizeIfKnown());
    }
}

public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
    p.apply(PubsubIO.readStrings().fromSubscription("projects/" + args[0] + "/subscriptions/test"))
        .apply(Window.into(FixedWindows.of(Duration.standardMinutes(1))))
        .apply(
            MapElements
                .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.integers()))
                .via((String row) -> {
                    String[] parts = row.split(",");
                    System.out.println(Arrays.toString(parts)); // Code fires
                    return KV.of(parts[0], Integer.parseInt(parts[1]));
                })
        )
        .apply(GroupByKey.create())
        .apply(ParDo.of(new Compute()));
    p.run();
}

How do I setCoders when doing an end-to-end test when my coder requires a KvCoder which requires a UnionCoder

I'm working with the Java Cloud Dataflow SDK on some end-to-end tests.
@Test
public void testEndtoEnd() throws Exception {
    TupleTag<Entity> tag1 = aTagFromElsewhere1;
    TupleTag<Entity> tag2 = aTagFromElsewhere2;
    TupleTagList tags = TupleTagList.of(tag1).and(tag2);
    CoGbkResultSchema schema = new CoGbkResultSchema(tags);
    JoinEntities myDoFn = new JoinEntities();
    DoFnTester<KV<String, CoGbkResult>, Entity> fnTester = DoFnTester.of(myDoFn);

    List<RawUnionValue> rawUnionValues = new ArrayList<RawUnionValue>();
    Date validThruDate = new Date(System.currentTimeMillis() + 5000L);
    rawUnionValues.add(new RawUnionValue(0, aValidEntity1));
    rawUnionValues.add(new RawUnionValue(1, aValidEntity2));

    CoGbkResult result = new CoGbkResult(schema, rawUnionValues);
    KV<String, CoGbkResult> aCoGbkPair = KV.of("Bleh", result);

    Pipeline p = TestPipeline.create();
    PCollection<KV<String, CoGbkResult>> input = p.apply(Create.of(aCoGbkPair))
        .setCoder(KvCoder.of(StringUtf8Coder.of(), CoGbkResultCoder.of(UnionCoder, schema)));
    PCollection<String> output = input.apply(new FormatEntitiesForTsv());

    DataflowAssert.that(output).containsInAnyOrder(/**TODO: Create test data**/);
}
The problem I'm having is that within setCoder I am using CoGbkResultCoder.of(), which requires a UnionCoder. I'm not sure how to get this UnionCoder; I've looked at its class and it isn't accessible.
How do I work around this? (Alternatively, if there is a better way to go about constructing the input, I am all ears.)
Thanks and cheers :)
Indeed, it's an oversight in the SDK: UnionCoder ought to be public, and it was made public in the Beam SDK some time ago. Your best option would be either to build your own version of the Dataflow SDK with this change, or to wait for us to make the change in the GitHub repo and for the next Maven release (I'll send a pull request and update this answer).
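With a build where UnionCoder is public, the coder construction could then look roughly like this (a sketch: entityCoder stands in for whatever coder your Entity type actually uses, and the element coders must be listed in the same tag-index order as your RawUnionValues):
// Whatever coder your Entity type uses; left open here
Coder<Entity> entityCoder = ...;
// One element coder per TupleTag, in tag-index order (0 -> tag1, 1 -> tag2)
UnionCoder unionCoder = UnionCoder.of(Arrays.<Coder<?>>asList(entityCoder, entityCoder));
KvCoder<String, CoGbkResult> coder =
        KvCoder.of(StringUtf8Coder.of(), CoGbkResultCoder.of(schema, unionCoder));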

What URL will get the status code (result) of the last Jenkins job?

I am wondering if anyone knows what URL (as a GET or POST) will return the status code (result) of the last Jenkins job, when the build number is not known by the client making the request. I just want to be able to detect whether the result was RED or GREEN/BLUE.
I have this code sample, but I need to adjust it so that it works for this purpose (as stated above):
public class Main {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost/jenkins/api/xml");
        Document dom = new SAXReader().read(url);
        for (Element job : (List<Element>) dom.getRootElement().elements("job")) {
            System.out.println(String.format("Name:%s\tStatus:%s",
                    job.elementText("name"), job.elementText("color")));
        }
    }
}
Once I figure out the answer, I will share a full example of how I used it. I want to create a job that collects information on a test suite of 20+ jobs and reports on all of them in an email.
You can use the symbolic descriptor lastBuild:
http://localhost/jenkins/job/<jobName>/lastBuild/api/xml
The result element contains a string describing the outcome of the build.
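Adapting the dom4j snippet from the question, reading that element could look roughly like this (myJob is a placeholder for your actual job name):
URL url = new URL("http://localhost/jenkins/job/myJob/lastBuild/api/xml");
Document dom = new SAXReader().read(url);
// Prints e.g. SUCCESS, FAILURE or UNSTABLE
System.out.println(dom.getRootElement().elementText("result"));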
