How do I consume a real web service from a BPEL process?

I've been doing some research on BPEL for about two weeks now and still don't quite get it.
I have deployed the HelloWorld sample in ODE and have also managed to deploy this other one.
My intention was to do something like the second example but with my own real WS deployed and working.
I'm now at the point of having a process with no errors and correctly deployed in ODE with the following structure:
I have started the project from a service definition, importing my Multiply.wsdl. The Designer has composed the import into MultiplyProcessArtifacts.wsdl next to the PartnerLinkTypes, all automagically, so I assume all namespaces, etc. are OK.
There are a few concepts I misunderstand that keep me from making all of this work:
In my original Multiply.wsdl I have
<soap:address location="http://localhost:8080/WS-multiply/multiply"/>
but ODE tells me my soap:address must have the form http://host:port/ode/processes/...
This doesn't sound reasonable to me since my WS could be implemented anywhere outside my ODE_HOME.
The second example I mentioned before explains how the Designer presumably creates a "Caller.wsdl", which in fact does what I want: it acts as a "wrapper" WSDL, providing the BPEL process with entry and exit points. The issue is that the Designer does not generate that interface. Am I supposed to create it myself? Do I have to create it at all?
If that 3rd wsdl is really needed, is it the one I would have to call if I wanted to test the whole process?

It looks like your partner WSDL is associated with the myRole of a partnerLink. PartnerLinks and partnerLinkTypes are BPEL concepts used to define dual interfaces, in the sense that if a partner A wants to communicate with a BPEL process as a buyer, it needs to provide a certain set of functionality that the process can use for further communication (e.g. sending a shipment confirmation to the buyer). Thus, a partnerLink maintains two roles: the myRole is the portType (a.k.a. interface) that the process itself provides, while the partnerRole refers to a portType the process expects to be implemented by the partner. MyRoles must of course be implemented by the BPEL process and therefore need an endpoint exposed by the BPEL engine; partnerRoles can be bound to arbitrary endpoints. This binding happens in the deployment descriptor, which is the deploy.xml in ODE.
I guess you can fix your process by assigning your partner WSDL to a partner role.
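A minimal deploy.xml sketch along those lines; the namespaces, service and port names here are illustrative, not taken from your project. The provide element exposes the process's own (myRole) endpoint under /ode/processes, while invoke binds the partnerRole to your real Multiply service wherever it is deployed:

<deploy xmlns="http://www.apache.org/ode/schemas/dd/2007/03"
        xmlns:pns="http://example.com/multiply/process"
        xmlns:wns="http://example.com/multiply/wsdl">
  <process name="pns:MultiplyProcess">
    <active>true</active>
    <!-- myRole: endpoint exposed by ODE under http://host:port/ode/processes/... -->
    <provide partnerLink="client">
      <service name="pns:MultiplyProcessService" port="MultiplyProcessPort"/>
    </provide>
    <!-- partnerRole: your real WS, deployed anywhere, e.g. http://localhost:8080/WS-multiply/multiply -->
    <invoke partnerLink="multiplyPartner">
      <service name="wns:MultiplyService" port="MultiplyPort"/>
    </invoke>
  </process>
</deploy>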

I hope http://thiliniishaka.blogspot.com/2012/10/develop-ws-bpel-process-using-wso2.html
and http://thiliniishaka.blogspot.com/2012/10/part-2-developing-ws-bpel-process-using.html may help you resolve the aforementioned queries.
Thanks
Thilini

It's mandatory to have ode.war deployed on the Tomcat server. Tomcat creates a path like in the picture; you need to configure your endpoint with the complete path /ode/processes:
c:\apache-tomcat-7.0.55\webapps\ode\WEB-INF\processes\BPEL_WS\
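For example, the exposed (myRole) endpoint of the process then looks something like this; the process service name is illustrative:

<soap:address location="http://localhost:8080/ode/processes/MultiplyProcessService"/>

The original Multiply service itself keeps its own address (e.g. http://localhost:8080/WS-multiply/multiply); only the process endpoints live under /ode/processes.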

Related

I need some kind of decision module in flowground

I'm trying to send different message cards to multiple Teams channels.
I have already created a webhook (telekom/webhook) for this which gives me the right variables via JSON.
There are four department receiver channels (telekom/rest-api-component) which are also configured to send pre-formatted Teams message cards with the variables they have submitted.
Currently this happens for all channels at the same time. In between, I would need an "action" in which I can decide which of the channels is served based on the input values. Unfortunately I can't find anything suitable due to the variety of the APIs. Do you know how I could realize this? Something like: if value department = Backoffice, then the (Teams "Account Management") action.
In order to be able to talk to the different Office 365 applications, I wanted to use the Microsoft Graph API, which has been available for some time now. I couldn't find it in flowground. Are you planning to include this module?
For the implementation with Office365 flows this would be absolutely necessary for me.
I want to come back to this question: the CBR is indeed a good choice for executing decisions. But is this the best solution in every situation? I do not think so.
Assume the following task:
Depending on an input parameter test, you want to fire a request to one of two different web services (WS1: google.de and WS2: bing.de).
Solution 1: You realize the requests with dedicated connectors for WS1 and WS2.
In this case you need the CBR in front of the WS1 and WS2 connectors to decide which connector has to be used next.
Solution 2: You are able to realize both requests with the REST-API connector. In this case you can use a JSONATA expression as the URL mapping, e.g.
(test = "google") ? "http://google.de" : "http://bing.de"
By using JSONATA expressions every connector has (limited) capability for executing decisions.
Solution 2 has a big advantage when you are using realtime flows. In this case you are able to reduce the number of connectors that are needed for running the flow and (very important from a cost perspective) the number of tokens permanently claimed by this flow.
For reducing the complexity of JSONATA expressions (e.g. when you add further search engines) and for separating individual configuration items, you can use the configuration connector (we can discuss this in a separate thread if needed).
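As an illustration, a JSONATA lookup table keeps such a URL mapping readable when more targets are added; a sketch, with the table contents made up:

$lookup({"google": "http://google.de", "bing": "http://bing.de", "duckduckgo": "https://duckduckgo.com"}, test)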
Solution 1 is the choice without an alternative when you have to decide between different structures/connectors that need to be executed within a flow.
Please try the Content-Based-Router: https://doc.flowground.net/guides/content-based-router.html, it is available on the Connector Catalog.

How to peek at message while dependencies are being built?

I am building multitenancy into the unit of work for a set of services. I want to keep the tenancy question out of the way of day-to-day business domain work, and I do not want to touch every existing consumer in the system (I am retrofitting multitenancy onto a system without any prior concept of a tenant).
Most messages in the system will be contexted by a tenant. However, there will be some infrastructure messages which will not be, particularly for the purpose of automating tenant creation. I need a way of determining whether to use a tenant-contexted unit of work or an infrastructure unit of work with no tenant context, because the way I interact with the database is different depending on whether I have tenant context. The unit of work is built in the process of spinning up the dependencies of the consumer.
As such, I need a way of peeking at the message or its metadata before consuming it, and specifically, I need to be able to peek at it during the dependency building. I intended to have a tag interface to mark tenant-management messages out from normal business domain messages, but any form of identifying the difference could work. If I am in a unit of work resulting from an HTTP request, I can look at WebApi's HttpContext.Current and see the headers of the current request, etc. How do I do something analogous to this if I am in a unit of work resulting from messaging?
I see there is a way to intercept messages with BeforeConsumingMessage() but I need a way of correlating it to the current unit of work I am spinning up and I'm not seeing how that would work for me. Pseudocode for what I am trying to do:
if (messageContext.Message is ITenantInfrastructureMessage)
{
    database = new Database(...);
}
else
{
    var tenantId = messageContext.Headers.TenantId;
    database = new TenantDatabase(..., tenantId);
}
I am working in C#/.NET using MassTransit with RabbitMQ and Autofac with MassTransit's built-in support for both.
Your best option is to override at the IConsumerFactory<T> extension point, extract the tenant from the message (either via a message header or some message property), and register it in the container's child lifetime scope so that subsequent resolutions from the actual consumer class (and its dependencies) are properly matched to the tenant in the message.
In our systems, we have a TenantContext that is registered in a newly created LifetimeScope (we're using Autofac), after which we resolve the consumer from the child scope, and the dependencies that use the tenant context get the proper value since it's registered as part of building the child container for the message scope.
It works extremely well, we even built up extension methods to make it easy for developers registering consumers to specify "tenant context providers" that go from a message type to the proper tenant id, which is used to build the TenantContext.
You can do similar things with activity factories in Courier routing slips (which are a specialization of a consumer).
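A rough sketch of the scope-building part of that idea, using Autofac and a MassTransit ConsumeContext; TenantContext, BeginMessageScope, and the "TenantId" header name are illustrative and not part of either library:

using Autofac;
using MassTransit;

// Marker interface from the question, used to recognize infrastructure messages.
public interface ITenantInfrastructureMessage { }

public class TenantContext
{
    // Null means infrastructure work with no tenant.
    public string TenantId { get; set; }
}

public static class TenantScopes
{
    // Peek at the message and its headers before the consumer's dependencies are
    // built, and register the matching TenantContext in a per-message child scope.
    // The consumer (and its unit of work) is then resolved from this scope.
    public static ILifetimeScope BeginMessageScope<T>(
        this ILifetimeScope rootScope, ConsumeContext<T> context)
        where T : class
    {
        return rootScope.BeginLifetimeScope(builder =>
        {
            var tenantId = context.Message is ITenantInfrastructureMessage
                ? null
                : context.Headers.Get<string>("TenantId");

            builder.RegisterInstance(new TenantContext { TenantId = tenantId });
        });
    }
}

Inside a custom IConsumerFactory<T> (or a "tenant context provider" as described above), you would call BeginMessageScope(context), resolve the consumer from the returned scope, and dispose the scope when the message completes.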

Set an Authorization on process definitions in Camunda BPM

Currently we evaluate Camunda BPM as a possible Open Source BPM framework. One important use case is that we need to manage which user is allowed to see and start which process in the Camunda tasklist. According to the official documentation: http://docs.camunda.org/latest/guides/user-guide/#process-engine-authorization-service and this post here: https://groups.google.com/forum/#!topic/camunda-bpm-users/EjY8sxycNAQ
it is not possible to define access rights on process definitions. The problem is that the post has not been updated since last year.
Therefore, is it possible to define Authorizations on process definitions?
Best regards
Ben
You can define a potential starter group on the process definition, though not via the Modeler but directly in the XML:
<bpmn2:process id="..." name="..." isExecutable="true">
  <bpmn2:extensionElements>
    <activiti:potentialStarter>
      <resourceAssignmentExpression>
        <formalExpression>group(YOUR_PROCESS_STARTER_GROUP)</formalExpression>
      </resourceAssignmentExpression>
    </activiti:potentialStarter>
  </bpmn2:extensionElements>
  ...
and then query it via API:
repositoryService.createProcessDefinitionQuery().startableByUser(userId).latestVersion().list();
Note: we are not using the camunda tasklist, we wrote our own. So I cannot tell if this is going to work out of the box.

Simple BPEL workflow: select query returns multiple rows

I have to implement a simple BPEL workflow, which only executes a select query on a database. I have been able to create a Data Service WSDL file. Its flow is attached to this question as an image file; please have a look at that first. If you see the image, I somehow ended up with a complex structure for the parameter "Name" (WSDL code auto-magically generated by the WSO2 Data Services Server). It has a complex element called "Customer" which has 2 string values, "Name" and "nid". I have also copied the WSDL file in case you need to see it. Here: http://pastebin.com/QTKZbdzn
I believe I am not sending any input parameters, yet when I try to directly invoke the Data Service without a Receive activity, it gives an error saying "No Start activity has been defined for the process".
Anyone who has implemented a similar BPEL workflow for the Data Service, please let me know. The data service works fine! I have tested it separately. thanks!
UPDATE
I ended up making a BPM like this:
I have to change the DSS also, so that I provide some input to the BPM: rather than "select * from customer" I am now implementing "select * from customer where nid = ?". It proved to be pretty successful. Thanks for helping me out, joergl & vimesh. But if you still figure out how a query with no WHERE clause would work, update it here.
I have made a bpel flow with data service.
The very first thing we need to do is add a receive element to the BPEL flow. It lets us send a request to the BPS, and at the same time the BPS creates a new process instance from that request.
After that you can do whatever you wish, like invoking ESB proxies, DSS services, etc. While invoking the external service you can pass parameters to that request. Even though you are not sending any input parameters to the DSS service, you should make the request to the DSS inside the BPS in the correct format (I mean the body part).
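For instance, a createInstance receive along those lines might look like this; the partner link, port type, operation and variable names are illustrative, not taken from your project:

<bpel:receive name="receiveInput"
              partnerLink="client"
              portType="tns:CustomerProcessPortType"
              operation="process"
              variable="input"
              createInstance="yes"/>

The createInstance="yes" attribute is what provides the start activity that the "No Start activity has been defined for the process" error complains about.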
You can simply go ahead with the available BPEL samples first and then start with the DSS integrations.

CXF client loads wsdl for both service and port?

In a Java web app, I need to call a remote SOAP service, and I'm trying to use a CXF 2.5.0-generated client. The SOAP service is provided by a particular ERP vendor, and its WSDL is monstrous: thousands of types, dozens of XSD imports, etc. wsdl2java generates the client OK, thanks to the -autoNameResolution flag. But at runtime it retrieves the remote WSDL twice, once when I create the service object, and again when I create a port object.
MyService_Service myService = new MyService_Service(giantWsdlUrl); // fetches giantWsdl
MyService myPort = myService.getMyServicePort(); // fetches giantWsdl again
Why is that? I can understand retrieving it when creating myService, you want to see that it matches the client I'm currently using, or let a runtime wsdl location dictate the endpoint address, etc. But I don't understand why asking for the port would reload everything it just went out on the wire for. Am I missing something?
Since this is in a web application, and I can't be sure that myPort is threadsafe, then I'd have to create a port for each thread, except that's way too slow, 6 to 8 seconds thanks to the monstrous wsdl. Or add my own pooling, create a bunch in advance, and do check-outs and check-ins. Yuck.
For the record, the JaxWsProxyFactoryBean creation route does not ever fetch the wsdl, and that's good for my situation. It still takes a long time on the first create(), then about a quarter second on subsequent create()s, and even that's less than desirable. And I dunno... it sorta feels like I'm under the hood hotwiring the thing rather than turning the key. :)
Well, you have actually answered the question yourself. Each time you invoke service.getPort() the WSDL is loaded from the remote site and parsed. JaxWsProxyFactoryBean goes exactly the same way, but once the proxy is obtained it is re-used for further invocations. That is why the first run is slow (because of "warming up"), but subsequent ones are fast.
And yes, JaxWsProxyFactoryBean is not thread-safe. Pooling client proxies is an option, but it will unfortunately eat a lot of memory, as the JAX-WS runtime model is not shared among client proxies; synchronization is perhaps the better way to follow.
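A minimal sketch of the no-WSDL route plus synchronization, assuming the wsdl2java-generated MyService interface; the endpoint address and operation name are placeholders:

import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

// build the proxy once (no WSDL fetch) and share it
JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
factory.setServiceClass(MyService.class);
factory.setAddress("http://erp.example.com/services/MyService"); // placeholder endpoint
MyService sharedPort = (MyService) factory.create();

// guard invocations if the proxy's thread-safety is in doubt
synchronized (sharedPort) {
    sharedPort.someOperation(request); // placeholder operation
}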
