Bonita BPM Components - business-process-management

I am a complete newbie to Bonita and to BPM in general. In my introductory video lessons I have so far learned about data models, the UI, and a few other things. What I want to know is the components (I don't know what they are actually called, so I call them components), by which I mean contracts, data models, and the like: what other components are involved in BPM? I don't even know how to Google for this.
Any help will be highly appreciated; even a link would be very useful.

Bonita BPM includes the following main components:
Studio: the tool used to design your processes. It also includes an embedded test environment.
UI Designer: a web tool included in the Studio that lets you create end-user web interfaces.
Portal: the end-user web interface for interacting with processes. It is also used by the administrator to deploy processes, configure them, and so on.
Engine: the technical component responsible for process execution.
You also have several important concepts in Bonita BPM:
Process definition: this is the model of the process, with tasks, gateways, and so on. You create it using Bonita BPM Studio.
Business Data Model: this is a model of your data that you can create using Bonita BPM Studio. This model generates a set of Java classes that represent your business data, together with the associated code to save and retrieve that data from a database. Data defined in this model are shared by all process definitions.
In a process definition you can declare business variables. They are actually references to the business data stored in the database. You can instantiate them using the default value defined for each business variable, and you can update them using operations on tasks.
A contract defines the data expected by the Engine in order to instantiate a process or execute a task. The end user will usually submit a form to start a process or execute a task; the contract defines which data is expected from that form submission.
Forms are created using the UI Designer. A form is actually a set of widgets bound to form variables. Form variables can be initialized using REST API calls or by user input in the widgets. A REST API call can be used, for example, to fetch business variable values or to access an external system such as a database. The submit button is also associated with a form variable; this form variable must contain all the information required by the contract.
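To make the contract idea a bit more concrete, here is a rough sketch of what a form submission boils down to: an HTTP POST whose JSON body matches the contract of the process. The process id (1234), the "ticketInput" contract structure, and the field names are invented for illustration, and the instantiation endpoint reflects the Bonita REST API as I understand it; a real call also needs an authenticated Bonita session, which UI Designer forms handle for you.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch only: starting a process case by posting contract inputs as JSON.
// Process id, contract input names, and host are made up for this example;
// authentication (session cookie + API token header) is omitted.
public class StartCaseSketch {
    public static void main(String[] args) throws Exception {
        String contractPayload =
                "{\"ticketInput\": {\"title\": \"Printer broken\", \"priority\": \"HIGH\"}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/bonita/API/bpm/process/1234/instantiation"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(contractPayload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```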
Connectors are part of the process definition and let you interact with third-party systems when the process is executed. For example, a connector can call a web service to decide whether a specific path needs to be taken.
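As an illustration of what a custom connector implementation can look like, here is a minimal Java sketch based on the Bonita Engine connector API (AbstractConnector, getInputParameter, setOutputParameter); the "projectCode" input, the "approved" output, and the dummy rule are invented for this example.

```java
import org.bonitasoft.engine.connector.AbstractConnector;
import org.bonitasoft.engine.connector.ConnectorException;
import org.bonitasoft.engine.connector.ConnectorValidationException;

// Minimal custom connector sketch: checks an input coming from the process
// and exposes a boolean output the process can use to choose a path.
public class ProjectCodeCheckConnector extends AbstractConnector {

    private static final String PROJECT_CODE_INPUT = "projectCode"; // made-up input name
    private static final String APPROVED_OUTPUT = "approved";       // made-up output name

    @Override
    public void validateInputParameters() throws ConnectorValidationException {
        if (getInputParameter(PROJECT_CODE_INPUT) == null) {
            throw new ConnectorValidationException(PROJECT_CODE_INPUT + " is mandatory");
        }
    }

    @Override
    protected void executeBusinessLogic() throws ConnectorException {
        String code = (String) getInputParameter(PROJECT_CODE_INPUT);
        // In a real connector this is where you would call the third-party
        // system (web service, database, ...). Here we just apply a dummy rule.
        setOutputParameter(APPROVED_OUTPUT, code.startsWith("PRJ-"));
    }
}
```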
I would recommend checking out the Getting Started tutorial in the documentation. You can also watch the BPM Camp videos.

Related

OData Open Type and OData Client Library

I have a business requirement where I need to expose a set of custom properties defined by the user, and since this is user configuration, I cannot get away with creating classes up front. Therefore I need to opt for the open types feature in OData.
Q1. Is there any sample implementation out there showing how I can persist the data to a database and also support querying capabilities on open types?
Q2. One issue I noticed is that the client library currently does not handle open types correctly; they can only be used via partial classes, which means the user has to know the custom properties up front so they can hand-craft the partial classes, which is not what I want to do. A better approach would have been to support open types on the client side via dynamic properties. Any pointers on how the client-side experience can be improved?
Regarding query capabilities on open types: ordering and filtering are supported in v5.5 (to be released by the end of this month). For querying the value of dynamic properties, you can follow this pull request; part of it is in the master branch now.
Regarding persisting the data to a database, I think you can consider a non-relational database, which can be a good choice for your open type data.
Regarding Q2, support via dynamic properties is not implemented in the client library; maybe you can open an issue on GitHub for us.

MVC web framework that allows end users to define models at runtime

It seems this can be hacked into Django, but I'd prefer a framework that has better support for end-user-defined models.
Basically, I want the users of my app/website to be able to do at runtime what I do at compile time when writing the model code: specify models that generate/modify a database schema. Obviously I cannot let the users of the web app modify the code in models.py, so there has to be another way. Concurrency shouldn't be an issue, since each user-defined model would belong to only one user.
I don't mind using any programming language (Python, Haskell, JavaScript, etc.) or any specific database (SQL, NoSQL, whatever). Rails/Django freed me from writing a lot of repetitive code; now I simply want that functionality of modifying the model also at runtime (and preferably the corresponding views and controllers). If there is a good framework that rids me of writing all that code, then I'll use it.
If there's no framework supporting it natively, does someone know a framework that at least makes it easy?
Portofino version 3 (http://www.manydesigns.com/en/portofino/portofino3) allows a modeler user to create data models interactively using a web interface called the "upstairs level". The system automatically generates a user interface (CRUD, charts, workflows) based on the model definition, without recompiling and essentially in real time as the model changes.
You can check the reference manual to see what kind of models are supported:
http://www.manydesigns.com/en/portofino/portofino3/3_1_x/reference-manual
Currently Portofino 3 is an end-of-life version. The newer version 4 (http://www.manydesigns.com/en/portofino) is a significant re-write that currently does not support online editing of the data model as version 3 did, but keeps the same principle of making the application editable (through admin/configuration pages) and customizable (using Groovy) online without recompiling or restarting the server.
For data-model changes and db refactoring, Portofino 4 relies on Liquibase:
http://www.liquibase.org/

Where to put Entity Framework Data Model in MVC application? Specific example

First I want to refer to this post:
Where to put Entity Framework Data Model in MVC application?
My edmx will have 7-10 tables in it. Not more.
The problem is that I have to build the model I'm working with out of, let's say, 4 of those tables.
So I'm asking myself: are these tables real model representations, would it be correct to put the edmx file in the "Models" folder, and how should I name this CONTAINER of models?
Or are 10 tables enough to justify creating a new project? What should the project be called? .DataAccess? And how should I name the edmx file in it?
I don't have that much experience with MVC and EF and am trying to figure out a best practice here.
Update: This post tells me not to put it in the Models folder: "The model should be decoupled from the backend data store technology as much as possible."
Personally my MVC projects (regardless of size) consist of the following as a minimum:
Data
Logic
Site
This structure seems to work pretty well as it separates business logic from storage and display.
You definitely don't want to put the EDMX in the Models folder, as that is reserved for view models. Best practice says that view models should be entirely disconnected from your storage entities.
In terms of naming the EDMX, I normally name it after the short name of the project; the more important thing is to get the namespace right for the EDMX so your models sit in the correct namespace location.
My response is based on Silverlight, and I understand it's a bit out of context because you are asking from an MVC viewpoint. But please allow me to illustrate where I put my EDMX.
First project solution
-Widgets. These are multiple UI projects with multiple XAML pages
-UI logic is heavy, orchestrating every widget and XAML page in one main user interface
-View-Models. These are almost equivalent to controllers in MVC. I use XAML to bind directly to View-Models, for example QuotationItemModel.vb, xyz.vb, and such. Multiple XAML pages may share one VM.
-XAML pages are supposed to use command bindings as per the implementing View-Models; for example, a button click is routed to the VM. I didn't achieve this because the UI coordination logic (from another UI architect) was interfering with my hooking into the delegate command (CanExecute/Execute, Func(Of Object, Boolean) / Action(Of Object)), causing a stack overflow in the first-level widgets' click event.
-Model. There is but one function here. Its job is to hook a delegate to the web service's async-call-completed event and then trigger the web service. The delegate's implementation actually sits back in the View-Model, i.e. QuotationItemModel.vb, and not inside the Model. There is truly only one function in Model.vb.
-There is no other logic in the Model; Model.vb only deals with endpoints, HTTP bindings, WCF stuff.
-There is no EDMX whatsoever in this solution. The Model also knows nothing about the database.
Second project (but inside third solution)
A WCF implementation. Lightweight. Again one function. Operation contracts only.
The code-behind only passes business objects into the third project.
The connection string for the EDMX is configured here and passed to the third project.
No other logic.
There is no awareness of the EDMX whatsoever.
Third project solution
-Begins with a simple factory to delegate logic and invoke classes
-What begins as simple factory logic becomes a very heavy backend. It uses design patterns to alleviate maintenance concerns. From here, the patterns can criss-cross between command, strategy, or abstract types, etc.
-The EDMX design is fully apparent in this layer
-Business objects interact in a logical manner with the EDMX
-I either do LINQ to Entities or parameterized queries here
-This layer consists of business logic, such as "an underwriting ID must exist before a claim transaction can be issued", or a quotation's running number sequence based on the server date, etc.
-There is some manual mapping of business objects to entities. Potentially tedious, but not always.
-The result is passed back as XML
The third project could very well be a separate solution with another lightweight web service in between, giving readiness for a 3-tier architecture. Then I would produce my own connection string to the EDMX at this pure layer. But mine is now more like a "2.5"-layer, 2-tier architecture; I sheepishly expose the connection string in the middle tier's web config.
By architecture I mean having another hardware platform altogether. Layers are separations for domain-driven design in the problem space, i.e. the UI, communication, and business domains. Technically speaking, the SQL Server database (beyond the EDMX) could very well sit in another architecture, e.g. Windows Azure.
There are pros and cons I see here. Please bring any criticisms gently; I am new to layering, really.
Cons
Without exposing data contracts, my UI is blind when communicating in the language of business objects and contracts. Previously this was easily achieved by having the EDMX in the WCF layer.
I now use XElement to represent the shared business objects, but I still need to figure out a way to expose the data contract without exposing database internals. Currently, I "instinctively" know and code the database fields in my XElements.
Potentially it's like a silent binding to the backend EDMX. Silence is sometimes bad, because if I get a column without data there are many possible causes. Nothing that cannot be solved via good error messaging from the XML result passed back, using my imagination.
Weak mechanism for versioning. Perhaps new clients interact with a separate operation contract for a silent redirection to Backend-Ver 2.0 whilst existing clients utilize Backend-Ver 1.0. This potentially means you should now have two EDMXs, one for the old database and one for the new, respectively.
Pros
Extreme decoupling. I can delete/rebuild the EDMX and the UI and WCF still compile. Only my third solution gets compilation errors in this extreme test.
From the Silverlight UI, triggering and communicating with a Microsoft Report Viewer report shares exactly the same classes invoked from the UI. There is no "additional web service function for reports" whatsoever. Whatever EDMX + logic is requested by the UI is exactly the same for the report, unless I choose otherwise.
PS: Silverlight communicates filter criteria to the report via query string.
The report, again, is not aware of the EDMX. For example, if I delete the EDMX from the backend and then update the data connection in the report project, the report project still compiles without problems.
Readiness for migration to multiple architectures without tears. Seasonal load balancing, an increase in the customer base, etc. may trigger this investment in architecture.
Reusability of business logic. For example, if the boss gets tired of Silverlight, I just need to re-code the UI business objects, say into JSON under HTML5. There are no changes to the business logic whatsoever, except for new requirements. For example, expanding into Life Insurance to co-exist with the existing General Insurance module, which is currently coded in Silverlight: imagine Life Insurance in HTML5 still coexisting with the same backend. Again, this is because neither front end is aware of the EDMX; I just need to focus on building the data contract from within the new technology.
An unexpected (I am new to layering, really!) side effect: I can potentially test my backend separately from the UI, which in turn exercises LINQ to Entities (that EDMX). Cool for unit testing.
Updating business logic does not require a new deployment to IIS (the middle layer), except maybe when it comes to versioning.
Anyway, here's the Layered Application Solution Guidance from talented software architect Serena Yeoh:
Layered Architecture Sample for .NET
http://layersample.codeplex.com/
http://layerguidance.codeplex.com/
Notice in the sample you download the ingenuity of having multiple UIs, in different technologies, over a common backend where the EDMX lives and sleeps, and, what's more, over Windows Workflow Foundation, selectively called as needed. You can see where Serena put the EDMX, and you can have at it with workable running code. Pure bliss.

Struts2 and multiple active wizards / workflows

I'm currently working on a Struts2 application that integrates a wizard / workflow in order to produce the desired results. To make it more clear, there is a business object that is changed on three different pages (mostly with AJAX calls). At the moment I'm using a ModelDriven action (that's extended by all the actions working with the same business object) coupled with the Scope interceptor. While this works okay if the user is handling data for only one business object at a time, if the user opens the wizard for different objects in multiple tabs (and we all do this when we want to finish things faster) everything will get messy, mostly due to the fact that I have only one business object stored in the session.
I have read a few articles about using a Conversation Scope Interceptor (main article) and about using the Scope plug-in (here). However, both approaches seem to have problems:
the Conversation Scope Interceptor doesn't auto-expire the conversations, nor does it integrate properly with Struts2;
the Scope plug-in lacks proper documentation and the last build was made in 2007 (and actually includes some of the ideas written by Mark Menard when he defines his Conversation Scope Interceptor, though it doesn't use the same code).
Spring's WebFlow plug-in seems a bit too complex to be used at the moment. I'm currently looking for something that can be implemented in a few hours time, though I don't mind if you can suggest something that works as needed, even if it requires more time than I'd currently want to spend on this now.
So, seasoned Struts2 developers, what do you suggest? How should I implement this?
Okay, this isn't a fully baked idea, but seeing as no one else has provided anything, here is what I would start with.
1) See if you can move the whole flow into a single page. I'm a big believer in the fewer-pages-is-better approach. It doesn't reduce complexity for the application at all, but the user generally finds the interface a lot more intuitive. One of the easiest ways to go about this is by using the JSON plugin and a lot of Ajax calls to your JSON services.
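As a rough sketch of option 1 (field names invented, and assuming the struts2-json-plugin is on the classpath with a result of type "json" in a package extending json-default), an action could expose the wizard state to the Ajax calls like this:

```java
import com.opensymphony.xwork2.ActionSupport;
import java.util.HashMap;
import java.util.Map;

// Sketch of a JSON service action: with <result type="json"/> the plugin
// serializes the action's public getters into the HTTP response, so the
// single-page wizard can read/update state via Ajax.
public class WizardStateAction extends ActionSupport {

    private Map<String, Object> wizardState = new HashMap<>();

    @Override
    public String execute() {
        // In a real app this would load/update the business object being edited.
        wizardState.put("step", 2);
        wizardState.put("invoiceNumber", "INV-0042"); // made-up field
        return SUCCESS;
    }

    public Map<String, Object> getWizardState() {
        return wizardState;
    }
}
```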
2) If you must transition between pages (or simply think it is too much client-side work to implement #1), then I'd look to the s:token tag. The very first page that kicks off a flow will use this tag, which creates a unique value on each invocation. You will store a map of model objects in your session, and an action will be provided with a model by looking it up from the session.
There are a couple of challenges with #2. One: how do you keep the session from collecting too many domain objects? a) It might not matter; if the session timeout is set to, say, six hours, you can be rather sure that overnight they will get cleared up. b) Provide a self-management interface which can get/set/list objects in the session. It might be what you thought of at first, but it would let a worker do a certain amount of work, then stop and work on another item. If the unit of work has some meaningful name (an invoice number or whatever) it could be quite useful.
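A possible sketch of the session-map idea, in Java: the "wizardToken" request parameter, the "openWizards" session key, and the WizardData class are all invented for illustration, and it assumes the classic Struts2 SessionAware interface.

```java
import com.opensymphony.xwork2.ActionSupport;
import org.apache.struts2.interceptor.SessionAware;
import java.util.HashMap;
import java.util.Map;

// Sketch: each wizard instance is keyed by the token generated on its first
// page, so several browser tabs can each edit their own business object.
public class WizardStepAction extends ActionSupport implements SessionAware {

    private Map<String, Object> session;
    private String wizardToken;   // hypothetical request parameter carrying the token
    private WizardData model;     // hypothetical domain object edited by the wizard

    @Override
    public String execute() {
        @SuppressWarnings("unchecked")
        Map<String, WizardData> wizards = (Map<String, WizardData>)
                session.computeIfAbsent("openWizards", k -> new HashMap<String, WizardData>());
        // Reuse the object for this token, or start a fresh one for a new flow.
        model = wizards.computeIfAbsent(wizardToken, k -> new WizardData());
        return SUCCESS;
    }

    @Override
    public void setSession(Map<String, Object> session) { this.session = session; }

    public void setWizardToken(String wizardToken) { this.wizardToken = wizardToken; }
    public WizardData getModel() { return model; }

    // Stand-in for the real business object
    public static class WizardData { /* fields omitted */ }
}
```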
A little more sophistication would be to move the model objects out of the session and into the service layer, setting an insertion time on each object when it is added. You would probably need a manager to hold each type of model object, and each manager would have a daemon thread that periodically scans the map of domain objects and cleans out expired ones.
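A minimal sketch of that manager idea (the class name, the six-hour TTL, and the scan interval are arbitrary choices for illustration, not anything Struts2 prescribes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a service-layer holder for in-progress wizard objects: entries are
// timestamped on insert and a daemon thread evicts anything older than the TTL.
public class WizardObjectManager<T> {

    private static final long TTL_MILLIS = TimeUnit.HOURS.toMillis(6);

    private final Map<String, Entry<T>> objects = new ConcurrentHashMap<>();
    private final ScheduledExecutorService cleaner =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "wizard-object-cleaner");
                t.setDaemon(true);
                return t;
            });

    public WizardObjectManager() {
        cleaner.scheduleAtFixedRate(this::evictExpired, 10, 10, TimeUnit.MINUTES);
    }

    public void put(String token, T value) {
        objects.put(token, new Entry<>(value, System.currentTimeMillis()));
    }

    public T get(String token) {
        Entry<T> entry = objects.get(token);
        return entry == null ? null : entry.value;
    }

    public void remove(String token) {
        objects.remove(token);
    }

    private void evictExpired() {
        long cutoff = System.currentTimeMillis() - TTL_MILLIS;
        objects.entrySet().removeIf(e -> e.getValue().insertedAt < cutoff);
    }

    private static final class Entry<T> {
        final T value;
        final long insertedAt;
        Entry(T value, long insertedAt) { this.value = value; this.insertedAt = insertedAt; }
    }
}
```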
You can build a more complicated system by kicking a flow off with one token and then using another token on each page, "flowId" and "currentPageId" respectively; then you can graph the allowable transitions.
Mind you at this point spring web flow is starting to look pretty good.
There is now a conversation plugin for Struts2 that achieves all these goals with very little work required by the developer: http://code.google.com/p/struts2-conversation/
It has:
-nested conversations
-cleanup of dead conversations
-convention over configuration with annotations and naming conventions
-inherited conversations
-fully integrated with Struts2
-the conversation scope can also be used by Spring IoC container-managed beans
Hope it helps somebody.

Generate dynamic list from external database

I'm very new to SharePoint and I am just wondering if it's possible to somehow use data from an external database as a list that users can select from within a form. How much extra development does this involve?
Essentially we have a different (non-SharePoint) site that allows us to set up projects. Once a project is set up, I would like the project code to be usable from within workflows, forms, etc.
If you have MOSS Enterprise, you may want to take a look at the Business Data Catalog (BDC). It allows you to bring external data into SharePoint. Supported inputs are either a database or a web service.
http://msdn.microsoft.com/en-us/library/ms563661(office.12).aspx
After you have set up the BDC, you can use the business data field to use that external data as metadata. That way, you can make decisions in workflows based on that metadata. If you want to play with the BDC, get yourself a decent tool to generate the application definition files (XML), because creating them manually is cumbersome.
Consuming external data in forms is even easier. It doesn't require the BDC, as you can use databases and web services directly as a secondary data source in InfoPath.
Another option (if you don't have MOSS Enterprise) is to create a timer job that keeps a list up to date based on the projects found. Of course, this is not updated in real time and requires some programming.
