What is an Inbox? - business-process-management

I really don't know anything about BPM.
In the BPM context, what exactly is an Inbox?
Is it an ordinary email inbox like anywhere else, or is it a list of tasks to do?

you should try: http://www.bpm.com/Docs/Q2%20Business%20Agility_BPM%20for%20Dummies.pdf
But yes, an inbox is usually a screen containing a list of the pending tasks for a particular user.
Cheers

In a BPM package you build business processes, for example a hire-employee process. After the process has been designed you can instantiate it: in this example, if you want to hire an employee, you instantiate that process. An instantiation of this process is a case, and cases are what you see in your BPM inbox. More specifically, you see each case that has a task assigned to you which is not yet completed. Look at it as your BPM to-do list.
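As a rough sketch of that idea (the class and field names below are made up for illustration and are not any particular BPM product's API):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    assignee: str
    completed: bool = False

@dataclass
class Case:
    """One instantiation of a process definition, e.g. 'hire employee' for one candidate."""
    process_name: str
    tasks: list = field(default_factory=list)

def inbox(cases, user):
    """The 'inbox': every case with a task assigned to the user that is not yet completed."""
    return [
        (case, task)
        for case in cases
        for task in case.tasks
        if task.assignee == user and not task.completed
    ]

# Example: two cases of the "hire employee" process
case1 = Case("hire employee", [Task("review CV", "alice"), Task("schedule interview", "bob")])
case2 = Case("hire employee", [Task("review CV", "alice", completed=True)])
print(inbox([case1, case2], "alice"))   # only case1's open "review CV" task shows up
```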

BPM (business process management) is a standard way to design and execute "business processes" in an enterprise environment. Usually this involves human interaction with the system (in plain terms, some code that defines the flow of the business process), although that is not always necessary. This human interaction with the system's processes is called "workflow".
So if there is any human activity in this automated process, it is communicated to the user, typically through an email-like interface called the workflow user interface, which he can use to perform actions such as approve, reject, or escalate to the next level. A BPM user's inbox therefore usually contains the jobs that need his attention, depending on his user profile as defined by the BPM system's user management component.
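A minimal, product-agnostic sketch of what acting on such an inbox item might look like; the action names and escalation levels are assumptions for illustration only:

```python
def handle_action(item, action, levels=("team_lead", "manager", "director")):
    """Apply a user's decision to a pending workflow item (a plain dict here)."""
    if action == "approve":
        item["status"] = "approved"
    elif action == "reject":
        item["status"] = "rejected"
    elif action == "escalate":
        # Route the item to the next approval level, if there is one.
        current = levels.index(item["level"])
        if current + 1 < len(levels):
            item["level"] = levels[current + 1]
            item["status"] = "pending"
        else:
            item["status"] = "needs_manual_review"
    else:
        raise ValueError(f"unknown action: {action}")
    return item

item = {"id": 42, "type": "purchase request", "level": "team_lead", "status": "pending"}
print(handle_action(item, "escalate"))  # now pending with the manager
```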

Related

How to manage Data consistency between two microservices?

I have two microservices, e.g. M1 and M2. M1 is responsible for managing user transactions, e.g. orders. When an order is completed, the summary data is sent to M2 via a message bus. M2 is responsible for generating reports on orders. Our transaction completes without checking whether the message was processed successfully by M2. The problem is that some orders do not appear in the reports, because the messages are not processed successfully (due to some random issue). What is the best way to make the data consistent between the two services? I am implementing a mechanism to pull data from M1 and identify the gaps using the reference numbers (each is a sequential number), which I know is not a good approach, as I may not know whether the last reference number I have is actually the last one. Any suggestions or improvements will be highly appreciated.
Thanks.
I have tried a pull-data mechanism, but I do not think it is a good idea.
You may take a look at the Saga pattern.
The goal of a saga is to form a transaction that spans services, with the ability to run compensating transactions. You could also use the two-phase commit (2PC) pattern, but it is mostly not recommended, since it spans resources: it holds resources until the end of the transaction, which is not good unless you insist on an immediate transaction that is short enough to release those resources as soon as possible.
Back to sagas: you may hear the term routing slip. The routing slip forms the chained steps that fulfill the transaction across services. It can be implemented in two ways: choreography or orchestration. If a failure happens at any step, compensation is triggered for all steps that have already completed. The compensation may be a rollback or any other strategy that should take place. For example:
order added
inventory item allocated
shipping service fails, so compensation takes place:
the allocated inventory item is released
the order is removed or canceled
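A minimal orchestration-style sketch of that flow (plain Python rather than MassTransit; the step and compensation functions are placeholders for real service calls):

```python
def add_order(ctx):        ctx["order_id"] = 1          # e.g. call the order service
def cancel_order(ctx):     ctx.pop("order_id", None)

def allocate_item(ctx):    ctx["allocated"] = True      # e.g. call the inventory service
def release_item(ctx):     ctx["allocated"] = False

def ship(ctx):             raise RuntimeError("shipping failed")  # simulate the failing step
def unship(ctx):           pass

# The "routing slip": each step paired with its compensation.
steps = [
    (add_order, cancel_order),
    (allocate_item, release_item),
    (ship, unship),
]

def run_saga(steps, ctx):
    done = []
    for step, compensate in steps:
        try:
            step(ctx)
            done.append(compensate)
        except Exception as err:
            # Failure: compensate every completed step, in reverse order.
            for comp in reversed(done):
                comp(ctx)
            return f"saga failed at {step.__name__}: {err}"
    return "saga completed"

print(run_saga(steps, {}))   # -> compensations release the item and cancel the order
```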
I use MassTransit.
Read about the MassTransit saga state machine, where the steps of the transaction are coordinated and the transaction's state is updated.
Read about Courier routing slips, where the routing slips for the transaction are defined.
Watch the MassTransit tutorial series by Chris Patterson.
Also, please consider a retry policy for when a failure occurs; this can be done through MassTransit configuration.
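For illustration only, a generic retry-with-backoff helper; in a MassTransit system you would use MassTransit's own retry configuration instead:

```python
import time

def with_retry(operation, attempts=3, delay=1.0, backoff=2.0):
    """Call operation(), retrying with exponential backoff on any exception."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == attempts:
                raise                      # out of attempts: let the failure surface
            time.sleep(delay)
            delay *= backoff               # wait longer before the next try

# Hypothetical usage: with_retry(lambda: bus.publish(order_summary))
```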

In which real life application is BPM actually used?

What is the role of BPM (Business Process Management) in a real-world application?
I mean in which cases or where is BPM actually used?
I can get the documentation from the net, but where is it actually used?
One definition of BPM is that it combines workflow management (humans interacting) with enterprise application integration (EAI, systems interacting). You can get applications and tools for those applications, sometimes embedded, which you can use to define your business process. Then, in a staging process, you can roll out the business process to that system or a set of applications. To execute a business process (BP) you can use a business process engine (e.g. jBPM), and each step of a business process can be represented by a user interaction, a user task, a system task, or variations of these. You can have parallel business steps, and only once all of them are complete does the process move on to the next step. And there is much more to it than that.
Once you have rolled out a business process you can monitor it and collect data for a number of parameters. Often you will be interested in how long a business process took and what the limiting factors are, e.g. how many manual steps are required. Then you can go back to the designer tool and modify the process. With data taken from the production system you can then simulate whether your changes are actually an improvement. And if they are, you roll the new version out, replacing the previous one.
On a smaller scale you can use workflow designers or business process designers to let users change a particular part of a process. Often applications have hard-coded business process support with only limited parameterization/configuration. Take an approval process. Which purchases need approval? By whom? Could multiple people approve in parallel, thus shortening the purchasing process? Your application could let users design the approval process as needed. It would no longer be hard-coded and would allow much better adaptation to the needs of your customers.
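A small sketch of what such a data-driven approval process could look like; the rule format, amounts, and role names are made up for illustration:

```python
# The approval rules live in configuration, not in code, so users can change
# them without a new release.
approval_rules = [
    {"max_amount": 1_000,  "approvers": ["team_lead"],            "parallel": False},
    {"max_amount": 10_000, "approvers": ["team_lead", "manager"], "parallel": True},
    {"max_amount": None,   "approvers": ["manager", "cfo"],       "parallel": False},
]

def approvers_for(amount):
    """Pick the first rule whose limit covers the purchase amount."""
    for rule in approval_rules:
        if rule["max_amount"] is None or amount <= rule["max_amount"]:
            return rule
    raise ValueError("no rule matched")

print(approvers_for(5_000))   # team_lead and manager may approve in parallel
```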
These are just a few thoughts and definitely not full coverage of the subject, but maybe they already give some idea.
BPM can be used in healthcare applications for integrating various systems, and in banking for loan application processing, where humans are involved at different steps of the approval. In the healthcare industry it is mostly used for the bundles of claims received every day, both for claims processing and for knowing which state a given piece of business is in.
It is also used in airline reservation systems for the ticket booking process, where the process moves from one level to another, e.g. adding a passenger to a flight.

Implementing an Online Waiting Room

My organization is building a new version of our ticketing site and is looking for the best way to build an online waiting room when the number of users in our purchase path exceeds a certain limit. The best version of this queue would let new users in after existing users have either completed their purchase or have exceeded a timeout limit after entering the path.
I'm trying to get an idea of how this has been implemented by other organizations. Has anyone out there done something similar or have any experience with this? We have some ideas, but I'd like to get a sense of what solutions have been tried and what problems those solutions have run up against.
Just to be complete, this site is being built in Ruby on Rails, though I'd love to hear about how people have solved this regardless of platform.
Edit: To clarify: The need for the queue is not primarily to reduce load, but to limit the speed at which web customers are purchasing tickets relative to people buying in other ways, like over the phone.
Before I outline one method for this, I want to point out that what you want to do doesn't make a lot of sense. Services on the web aren't like a physical store, where I can walk up and see that it's crowded and decide to stay or not. Queueing people on your site strikes me as shifting the blame from you (unable or unwilling to adequately provision resources) to me (punishing me for trying to use your site).
If you're selling something like show tickets, where quantity is limited and each item is tied to a seat, I think it's better to reserve items and time out those reservations if they aren't paid for in a timely manner. Ticketmaster does this, and I think it's a much better solution than blocking people at the door.
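A toy, in-memory sketch of that reserve-then-expire idea (a real system would persist this and handle concurrency, but the shape is the same; the ten-minute TTL is an arbitrary choice):

```python
import time

RESERVATION_TTL = 10 * 60          # seconds a seat stays held without payment
reservations = {}                  # seat_id -> (buyer_id, reserved_at)

def reserve(seat_id, buyer_id, now=None):
    """Hold a seat for a buyer unless someone else holds an unexpired reservation."""
    now = now or time.time()
    holder = reservations.get(seat_id)
    if holder and holder[0] != buyer_id and now - holder[1] < RESERVATION_TTL:
        return False               # still held by someone else
    reservations[seat_id] = (buyer_id, now)
    return True

def confirm_purchase(seat_id, buyer_id, now=None):
    """Payment only succeeds if this buyer's reservation is still live."""
    now = now or time.time()
    holder = reservations.get(seat_id)
    return bool(holder) and holder[0] == buyer_id and now - holder[1] < RESERVATION_TTL
```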
If you still want to go down this path, then I'd design the system like this:
As customers come to your site, record their arrival time. As they interact with the site, record a "last seen" time. "Last seen" will be used to determine activeness. You'll need a background job running very frequently to expire sessions quickly.
Once your limit is hit, you have an ordered queue of people who are blocked. As customers complete their transaction or time out, you'll mark the next person in the queue for entry into the purchase path.
For queued users, their browsers will make a request on a regular basis, checking to see if you've let them in yet. If yes, they proceed to the purchase path. If no, they continue to wait.
The purchase path needs a mechanism to check if someone is trying to circumvent your waiting area, and sends them back.
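Putting those pieces together, here is a bare-bones, single-process sketch of the bookkeeping; a real Rails deployment would keep this state somewhere shared such as Redis, and the capacity and timeout values below are arbitrary:

```python
import time
from collections import OrderedDict

CAPACITY = 100                     # max concurrent buyers in the purchase path
SESSION_TIMEOUT = 5 * 60           # seconds of inactivity before a slot is freed

active = {}                        # session_id -> "last seen" timestamp
waiting = OrderedDict()            # session_id -> arrival timestamp (keeps queue order)

def heartbeat(session_id):
    """Called on every request from a user already in the purchase path."""
    active[session_id] = time.time()

def expire_stale(now=None):
    """Background job: free slots for users who went quiet or finished."""
    now = now or time.time()
    for sid, last_seen in list(active.items()):
        if now - last_seen > SESSION_TIMEOUT:
            del active[sid]

def poll(session_id):
    """Called by the queued user's browser: are they allowed in yet?"""
    if session_id in active:
        return "proceed"
    waiting.setdefault(session_id, time.time())
    # Admit only the head of the queue, and only when a slot is free.
    if len(active) < CAPACITY and next(iter(waiting)) == session_id:
        waiting.pop(session_id)
        active[session_id] = time.time()
        return "proceed"
    return "wait"
```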
You might find the Online queuing for ticketing guide helpful. Check their repository at GitHub.
They have integrations with Ruby on Rails, PHP, .NET, iOS, Android, and similar platforms.
Queue-it enables you to gain control of website overload during extreme traffic peaks by offloading end users into an online queue.
When a peak traffic event occurs on a website, the online queue system sends users to the virtual waiting room environment where the users wait and are redirected back to the website at a rate it can handle.

Keeping applications and infrastructure connected

I work in an IT department that is divided into two groups. One group develops and manages applications, the other manages the company's infrastructure and servers. One of the problems we face is a breakdown in communication. I work for the application group, and one of the problems I have is not being notified when a server is taken down by infrastructure, or when a database is being refreshed.
Does anyone have suggestions on how to improve communications between the two groups or any ideas on how to keep a light-weight log across multiple systems (both linux and windows)? Ideally it would be nice if we could have our boxes just tweet their statuses or something.
Thanks for the help,
Ben
One thing you could do to communicate server status is to have your infrastructure group set up a network monitoring system like Nagios. This will give everyone in your application group the ability to get a snapshot view of the status of every server in the system. Having this kind of status is invaluable when you are doing development.
Nagios gives you network monitoring, but also allows you to show scheduled down time for a particular server in the system.
Another thing your group could do to foster communication with the Infrastructure is to have your build system report which servers it is currently using for building and testing your products.
Also, setting up regular meetings between stakeholders of both groups is probably a good idea too. If you are all talking to each other, even for 15 minutes a week, you'll probably see incidents like the one you described go down quite a bit.
I think this is a bigger issue of change control.
You should have hardware and software change control and an approval process.
Ultimately, infrastructure serves you - the purpose for IT infrastructure is to run applications.
In my current large financial data company, servers are not TOUCHED without proper authorization through the client and application groups. It seems like a huge pain, but every single server is there for a reason: to meet a specific business goal and run a specific application. There is simply no excuse for the infrastructure group to be changing things or disrupting servers of their own volition.
Response to critical hardware failure might be an exception.
Needed software and OS updates are handled through scheduled maintenance windows and an approved change process.
I like the Nagios idea as well. If you want to setup something that's more of a communication tool, I would recommend a content management system like Drupal.
We use Drupal internally to communicate between teams. When one team takes a server down, they add an event in Drupal. The rest of us either get it as an email, as an RSS item, or just by refreshing the page.
Implement a change control process where changes are submitted, approved and scheduled for BOTH groups. This lets everyone know what is going on. This process can be as light or heavy-weight as you want.

Using MSTest as site/environment monitoring tool

We currently use HP SiteScope for monitoring synthetic transactions across some of our web apps. This works pretty well, except that the licensing cost for each synthetic transaction makes it prohibitive to ensure adequate coverage across our applications.
So, an alternative would be to use SiteScope's URL monitoring, which can basically call a URL and then perform some basic checks for certain strings. With that approach, I'd like to create a page that either calls a bunch of pages or somehow taps into an MSTest group to run tests.
In the end, I'd like a set of test cases that can be used against multiple environments to be used for production verification, uptime, status, etc.
Thanks,
Matt
Have you taken a look at System Center Operations Manager 2007?
I'm just getting started, but it appears to do what you are describing in your question.
We are looking to monitor our data center and a web application... from the few things I have found on the web, it is going to fit our needs.
Update
I've since moved to Application Insights. A great overview can be found here, https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/
There are two methods one can use: a simple ping, or recording a multi-step synthetic user "experience". Basically you act as a user: using IE and a Visual Studio Web Test project, you record yourself navigating around your site and upload that file to Azure.
For example, I record logging in, navigating a few pages, and then logging out. As long as all of those events happen in a timely manner the site is in a good operating state.
If the tests fail, for example by taking too long to respond, I'll get an email alerting me that something isn't quite right.
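For a platform-neutral flavor of the same idea, here is a small probe script; the URLs, credentials, and pages checked are placeholders, and wiring up the alert email is left out:

```python
import requests

BASE = "https://example.com"       # placeholder site under test
TIMEOUT = 10                       # seconds allowed per step

def check():
    """Walk through a few key pages like a user would; fail loudly if any step misbehaves."""
    session = requests.Session()
    steps = [
        ("login",  lambda: session.post(f"{BASE}/login",
                                        data={"user": "probe", "password": "secret"},
                                        timeout=TIMEOUT)),
        ("orders", lambda: session.get(f"{BASE}/orders", timeout=TIMEOUT)),
        ("logout", lambda: session.get(f"{BASE}/logout", timeout=TIMEOUT)),
    ]
    for name, step in steps:
        response = step()
        if response.status_code != 200:
            raise RuntimeError(f"step '{name}' failed with HTTP {response.status_code}")
    print("all steps OK")

if __name__ == "__main__":
    check()
```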
