I'm working on a project that involves facial expressions. My question is: is it possible to use Watson Conversation to guide the conversation based on the facial expression, i.e. sad, happy, angry?
For example, if your expression is sad, Watson Conversation should steer the conversation to make you feel better, etc.
Yes, it is possible: you can make Watson Conversation follow any logic you define in your application (including the emotion scenario) by using context variables.
(see more under: https://console.bluemix.net/docs/services/conversation/develop-app.html#building-a-client-application)
The high-level idea is:
your application is attached to a service which provides the current emotion of the user
when interacting with Watson, the application puts this emotion as a variable into the context of the conversation
you use this variable in the conditions of dialog nodes in order to control where the conversation goes depending on the user's mood
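The steps above can be sketched in plain Java. This is a minimal simulation, not the Watson SDK: `nextNode` stands in for the dialog-node conditions you would author in the Conversation tool (e.g. `$emotion == 'sad'`), and the node names are made up for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class EmotionRouter {

    // Simulates evaluating dialog-node conditions such as
    // "$emotion == 'sad'" against the conversation context.
    static String nextNode(Map<String, ?> context) {
        Object emotion = context.get("emotion");
        if ("sad".equals(emotion))   return "cheer_up_dialog";
        if ("angry".equals(emotion)) return "de_escalate_dialog";
        return "default_dialog";
    }

    public static void main(String[] args) {
        // The emotion value would come from your facial-expression service;
        // the application copies it into the conversation context on each turn.
        Map<String, Object> context = new HashMap<>();
        context.put("emotion", "sad");
        System.out.println(nextNode(context)); // cheer_up_dialog
    }
}
```

In the real service the same decision happens server-side: your client only needs to keep writing the latest emotion into the context object it sends with each message.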
More specific implementation details will vary depending on how/when you fetch user emotions.
When annotating mentions using Watson Knowledge Studio, one often has examples such as:
"I received no feedback from in response to ..." or "I have never received any feedback".
If I were to annotate the mention "feedback" in the above, it is a "negative" example, i.e. it refers to something that did not happen. There are two possibilities when creating a custom entity type system:
(a) Include the negator in the mention, i.e. "no feedback" is the mention. This clearly does not work in the second example, since there is no negator before the word "feedback".
(b) Do not include the negator in the mention, but add an attribute to the mention using the mention class NEG (https://www.ibm.com/watson/developercloud/doc/wks/wks_t_ts_intro.shtml)
Clearly option (b) is the more general approach. However, once a model is trained, one needs to be able to extract entities from unseen examples. For this you have to use the Natural Language Understanding API (https://www.ibm.com/watson/developercloud/doc/natural-language-understanding/#entities).
When one uses this API, there doesn't seem to be a way to extract the mention attributes. That is, when I do entity extraction, how do I see whether a mention is negated, since with approach (b) the negator is not included as part of the mention?
As I mentioned in IBM Watson Knowledge Studio - Entities with Role attribute and extracting it from NLU api, mention attributes such as negation are currently not extractable in NLU.
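Until NLU exposes mention attributes, one workaround is to post-process the raw text around each extracted mention yourself. The sketch below is not part of the NLU API; the negator list and window size are arbitrary choices. It flags a mention as negated when a negator word appears shortly before it:

```java
import java.util.Arrays;
import java.util.List;

public class NegationCheck {

    // Naive list of negators; extend for your domain.
    static final List<String> NEGATORS = Arrays.asList("no", "not", "never", "without");

    // Looks for a negator within `window` characters before the mention.
    static boolean isNegated(String text, int mentionStart, int window) {
        int from = Math.max(0, mentionStart - window);
        String before = text.substring(from, mentionStart).toLowerCase();
        for (String neg : NEGATORS) {
            if (before.matches(".*\\b" + neg + "\\b.*")) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        String text = "I have never received any feedback";
        int start = text.indexOf("feedback"); // offset NLU returns per entity
        System.out.println(isNegated(text, start, 40)); // true
    }
}
```

This only approximates the NEG mention class from the annotated training data, but it works with exactly what NLU does return: the mention text and its character offsets.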
I have recently been tasked to look into Workflow Foundation. The actual goal would be to implement a system in which end users can define custom workflows in the deployed application (and, of course, use them). Personally, I have never used WF before (and reading around here on SO, people are very doubtful about it, as am I after reading those questions/answers), and I am having a hard time finding my way around it given the sparse learning resources available.
Anyway, there are some questions, for example, this, which mention something they call dynamic or user-defined workflows. They point out that WF makes it possible to "rehost" the designer, so that end-users can define their own new workflows after the application is deployed (without developer intervention (?), this is the part I am not really sure about).
I have been told by fellow employees that this way we could implement an application in which, once this feature is in place, we would no longer have to keep modifying the application every time a new workflow is needed. However, they also pointed out that they had just "heard it"; they don't have firsthand experience themselves either.
I have been looking around for samples online but the best thing I could find was a number guess app - barely more than a simple hello world. So not much that would point me to the right direction of how this user-defined workflow feature actually works and how it can be used, what its limitations are etc.
My primary concern is this: it is all right that one can define custom workflows, but no workflow is worth a penny without the possibility of actually inputting data throughout the process. For example, even if the only thing I need to do is register a customer in a complaint management system, I would need the customer's name, contact details, etc. If the end user should be able to define any workflow the given toolset makes possible, then there needs to be a way to provide the workflow consumers with a way of inputting data through forms. If the workflow can be of pretty much any nature, then so must be the data; otherwise, if we have to implement the UIs ourselves, this "end user throws together a workflow" feature is rather useless, because they would still end up coming to us to implement a form or some other data input for the individual steps.
So I guess there should be a way of defining the "shape" of the data that needs to be filled in at any given user-interaction phase of the workflow, which I could then inspect to dynamically generate forms. For example, if I found that the required data was made up of a name and a date of birth, I would render a textbox and a datepicker on the page.
What I couldn't really figure out from the Q&As here and elsewhere is whether this is even possible. Can I define and then later "query" the structure of the data to be passed to the workflow at any point? If so, how? If not, how should this user-defined workflow feature even be used, what is it good for?
To clarify a little: I could imagine something like specifying a complex type, which would be the view model (input model) in a regular MVC app, and then reflecting over it, getting the properties, and rendering input fields based on that.
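That reflection idea can be sketched independently of WF itself. In this sketch, `CustomerStep` is a hypothetical "shape" class for one workflow step, and the type-to-widget mapping is a stand-in for real form rendering:

```java
import java.lang.reflect.Field;
import java.time.LocalDate;

public class FormGenerator {

    // Hypothetical "shape" of the data one workflow step requires.
    static class CustomerStep {
        public String name;
        public LocalDate dateOfBirth;
    }

    // Map each property type to an input widget, as imagined in the question.
    static String widgetFor(Class<?> type) {
        if (type == String.class)    return "textbox";
        if (type == LocalDate.class) return "datepicker";
        return "unsupported";
    }

    public static void main(String[] args) {
        // Reflect over the step's public fields and pick a widget for each.
        for (Field f : CustomerStep.class.getFields()) {
            System.out.println(f.getName() + " -> " + widgetFor(f.getType()));
        }
    }
}
```

The open question in WF terms is where such a shape type would live and how the rehosted designer would let end users declare it; the reflection and rendering part itself is straightforward.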
Windows Workflow Foundation is about machine workflows, not business workflows. True, it is the foundational tool set Microsoft created for building their business workflow products. But out of the box WWF does not have the components you need to quickly and easily build business workflows. If you want to send an email in a workflow, you have to write that from scratch. Just about anything you can think of doing from a business point of view you have to write from scratch.
If you want to easily create business workflows using Microsoft products, check out the workflow features in SharePoint. It is the easiest of the Microsoft products to work with (in my experience). If that does not meet your needs, there are other products like BizTalk.
K2 is another company with a business workflow product that uses WWF as its base to make business workflows easier to build; the older K2 products actually created web pages automatically to collect the data from the user.
WWF is very low level, and arguably it lost traction after they rewrote the whole thing in 4.0. While not publicly stated by Microsoft, my personal opinion is that Service Fabric (from Microsoft) achieves the goals WWF originally tried to solve, namely a "more robust programming environment."
I want to develop an app/piece of software which understands text from various inputs and makes decisions according to it. Furthermore, if at any point the system gets confused, the user can manually supply the output, and from then on the system must learn to give that output in such scenarios. Basically, the system must learn from its past experience. The job I want to handle with this system is the mundane job of resolving customer technical problems (production L3 tickets). The first input in this case would be the customer's problem with the order (e.g. the state in which the order is stuck and the state to which they want it pushed), and the second input would be the current state of the order (data retrieved for that order from multiple tables of the db). For these two inputs, the output would be the desired action to take, such as updating certain columns and firing XML for that order. The tools I think would be required are a natural language processing (NLP) library for understanding text, and machine learning so as to learn from past confusing scenarios.
If you want to use Java libraries for your NLP pipeline, have a look at Apache OpenNLP; it gives you a lot of basic support (tokenization, sentence detection, named-entity recognition, and so on).
Then there is deeplearning4j, which provides a number of neural network implementations in Java.
Since you want a dynamic model that can learn from past experience rather than a static one, deeplearning4j gives you plenty of network implementations to experiment with.
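Library choice aside, the "learn from past confusing cases" loop described in the question can be sketched with nothing more than token overlap. This is a toy nearest-neighbour memory, not a recommendation of a specific algorithm; the Jaccard similarity and the 0.5 confidence threshold are arbitrary stand-ins for a real NLP/ML pipeline:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TicketResolver {

    // Past ticket -> resolution memory; grows as users supply answers.
    private final Map<String, String> memory = new HashMap<>();

    static Set<String> tokens(String text) {
        return new HashSet<>(Arrays.asList(text.toLowerCase().split("\\W+")));
    }

    // Jaccard similarity between the token sets of two problem descriptions.
    static double similarity(String a, String b) {
        Set<String> inter = new HashSet<>(tokens(a));
        inter.retainAll(tokens(b));
        Set<String> union = new HashSet<>(tokens(a));
        union.addAll(tokens(b));
        return union.isEmpty() ? 0 : (double) inter.size() / union.size();
    }

    // Returns a learned resolution, or null when the system is "confused".
    String resolve(String problem) {
        String best = null;
        double bestScore = 0.5; // below this, ask the user instead
        for (Map.Entry<String, String> e : memory.entrySet()) {
            double s = similarity(problem, e.getKey());
            if (s >= bestScore) { bestScore = s; best = e.getValue(); }
        }
        return best;
    }

    // Called when the user manually supplies the output for a confusing case.
    void learn(String problem, String resolution) {
        memory.put(problem, resolution);
    }

    public static void main(String[] args) {
        TicketResolver r = new TicketResolver();
        System.out.println(r.resolve("order stuck in shipping state")); // null: confused
        r.learn("order stuck in shipping state", "update STATUS column, refire XML");
        System.out.println(r.resolve("order stuck in shipping"));
    }
}
```

In practice you would replace the token-overlap similarity with OpenNLP preprocessing plus a trained model (e.g. from deeplearning4j), but the resolve/ask-user/learn loop stays the same.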
Hope this helps!
Hi all, I'm doing the edX course, and on this question I get an error.
If any of you know the correct answer, please tell me why I get an error when I select all options as true.
Which of the following are true about user stories? Select all that apply.
They should describe how the application is expected to be used
They should have business value
They do not need to be testable
They should be implemented across multiple iterations of the Agile lifecycle
From the SaaS textbook:
The BDD version of requirements is user stories, which describe how the application is expected to be used. They are lightweight versions of requirements that are better suited to Agile. User stories help stakeholders plan and prioritize development. Thus, like BDUF, you start with requirements, but in BDD user stories take the place of design documents in BDUF.
...
User stories came from the Human Computer Interface (HCI) community. They developed them
using 3-inch by 5-inch (76 mm by 127 mm) index cards, known as “3-by-5 cards.” (We’ll see other examples of paper and pencil technology from the HCI community shortly.) These cards contain one to three sentences written in everyday nontechnical language written jointly by the customers and developers. The rationale is that paper cards are nonthreatening and easy to rearrange, thereby enhancing brainstorming and prioritizing. The general guidelines for the user stories themselves is that they must be testable, be small enough to implement in one iteration, and have business value.
Therefore, for the question above, the true statements are (1) "They should describe how the application is expected to be used" and (2) "They should have business value".
Options (3) and (4) contradict the textbook: user stories must be testable, and they must be small enough to implement within a single iteration, not across multiple iterations.
We have an ERP system written in Java that we will adapt to a 3-tier architecture, and we want to add transaction controls (JTA).
We read that the best way to analyze where to place the controls is to create a graph of the system scenarios using BPM and then add the controls to the graph.
The web gives us two ways to build the graph:
By way of use (scenarios) of each module: adding to the graph the different routes that can be taken when using a module, for example, in the invoice module, the different ways to complete it (with detail, without detail, etc.)
By relation between the modules: adding to the graph how control passes from module to module, for example, how the invoice passes to the client account
Our questions are:
Which is the best way?
Is there another way to do that?
Definitely, using a BPM solution like jBPM will help you define your business scenarios and discover the interactions between the different departments and modules in your company. Using BPM involves some learning, so I would suggest taking a look at BPM solutions and seeing whether they can help in your specific implementation.
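Whichever of the two graph-building approaches you pick, the scenario graph itself can start very simply. In this sketch the module names and the boundary heuristic are made up for illustration: steps with more than one outgoing route (e.g. the invoice's "with detail" / "without detail" branches) are flagged as candidates for an explicit JTA transaction boundary.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class ScenarioGraph {

    // Directed edges: step -> possible next steps.
    private final Map<String, List<String>> edges = new HashMap<>();

    void addTransition(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Heuristic: a step that branches into several routes is a natural
    // place to review where a transaction must begin or commit.
    Set<String> transactionCandidates() {
        Set<String> result = new TreeSet<>();
        for (Map.Entry<String, List<String>> e : edges.entrySet()) {
            if (e.getValue().size() > 1) result.add(e.getKey());
        }
        return result;
    }

    public static void main(String[] args) {
        ScenarioGraph g = new ScenarioGraph();
        g.addTransition("invoice", "invoice-with-detail");
        g.addTransition("invoice", "invoice-without-detail");
        g.addTransition("invoice-with-detail", "client-account");
        System.out.println(g.transactionCandidates()); // [invoice]
    }
}
```

A BPM tool like jBPM gives you this graph (and much more) visually; the point of the sketch is only that once the scenarios are modeled as a graph, "where do the controls go" becomes a question you can answer by inspecting nodes and edges.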