How to keep conversation data in the MS Bot Framework and save all messages of a conversation

I am working with Microsoft bot development framework, using its node.js sdk.
I have been looking for a way to save all the messages of a conversation. I set persistConversationData to true, and tried to access the conversationData using session.conversationData. However, it is empty.
1. Is there a built-in method to access all the messages within a conversation?
2. If persistConversationData is not for that, can anyone please explain its usage?
Thank you so much.

By default, messages are not persisted by the Microsoft Bot Framework. For stateful operations, you can use the Bot State API in the following ways:
Set userData. The persisted data will be available to the same user across different conversations.
Set conversationData. The persisted data will be available to all the users within the same conversation.
Set privateConversationData. The persisted data will be available to the given user in the given conversation.
Set dialogData for storing temporary information in between the steps of a waterfall.
According to the documentation, conversationData is disabled by default. If you want to use it, you have to set persistConversationData to true.
tl;dr You have to take care of persistence for yourself. E.g.
// ... assuming `builder` (the botbuilder module) and `connector` are already set up
var bot = new builder.UniversalBot(connector, { persistConversationData: true });
bot.dialog('/', function (session) {
    // conversationData is a plain object by default, so keep the transcript under its own key
    var messages = session.conversationData.messages || [];
    messages.push(session.message);
    session.conversationData.messages = messages;
});
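For completeness, a minimal sketch (assuming the same builder and connector as above, and a hypothetical '/profile' dialog) showing the other state bags from the list side by side:

bot.dialog('/profile', [
    function (session) {
        builder.Prompts.text(session, 'What is your name?');
    },
    function (session, results) {
        session.dialogData.answer = results.response;            // temporary, only between steps of this waterfall
        session.userData.name = results.response;                // survives across conversations for this user
        session.privateConversationData.lastPrompt = 'name';     // this user in this conversation only
        session.endDialog('Saved, %s!', session.userData.name);
    }
]);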

Related

Realm Swift: Question about Query-based public database

I’ve seen all around the documentation that Query-based Sync is deprecated, so I’m wondering how I should go about my situation:
In my app (using Realm Cloud), I have a list of User objects with some information about each user, like their username. Upon user login (using Firebase), I need to check the whole User database to see if their username is unique. If I make this common realm using Full Sync, then all the users would synchronize and cache the whole database for each change right? How can I prevent that, if I only want the users to get a list of other users’ information at a certain point, without caching or re-synchronizing anything?
I know it's a possible duplicate of this question, but things have probably changed in four years.
The new MongoDB Realm gives you access to server level functions. This feature would allow you to query the list of existing users (for example) for a specific user name and return true if found or false if not (there are other options as well).
Check out the Functions documentation; there are some examples of how to call a function from macOS/iOS in the Call a function section.
I don't know the use case or what your objects look like, but an example function to calculate a sum would look something like this. It sums the two elements passed in the array and returns the result:
your_realm_app.functions.sum([1, 2]) { sum, error in
    if let err = error {
        print(err.localizedDescription)
        return
    }
    // `sum` is an optional BSON value, so unwrap the .double case
    if case let .double(x)? = sum {
        print(x)
    }
}
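For the username-uniqueness check itself, the server-side function is written in JavaScript in the Realm/App Services UI. A minimal sketch, assuming a User collection with a username field and a linked cluster service named "mongodb-atlas" (the function and database names are illustrative):

// Server-side function, e.g. named "isUsernameTaken"
exports = async function (username) {
    const users = context.services
        .get("mongodb-atlas")
        .db("mydb")
        .collection("User");
    const existing = await users.findOne({ username: username });
    return existing !== null;   // true if the username is already taken
};

The client would then call it the same way as sum above, e.g. your_realm_app.functions.isUsernameTaken([.string("someName")]) { result, error in ... }.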

Get firebase message from sender and receiver

I am using Firebase for a simple chat application. I am looking to fetch the messages between user 1 and user 2. My database structure looks like this.
I am using queryOrdered to fetch messages, but it can only filter on either the sender or the receiver.
Database.database().reference().child("Chats")
    .queryOrdered(byChild: "receiver")
    .queryEqual(toValue: "2WCS7T8dzzNOdhEtsa8jnlbhrl12")
    .observe(.value, with: { snapshot in
    })
But using this code I only get the messages where this user is the receiver. How can I get the messages they sent as well? Can I use an AND condition to also query the sender child?
With your current data structure there is no way to get, in a single query, both the messages that a specific user sent and the messages they received.
Firebase Database queries can only order/filter on a single property. In many cases it is possible to combine the values you want to filter on into a single (synthetic) property.
For example, you could combine the sender and receiver UIDs into a single property "sender_receiver": "2WCS7T8dzzNOdhEtsa8jnlbhrl12_Sl91z...." and then query for that combined value.
For a longer example of this and other approaches, see my answer here: Query based on multiple where clauses in Firebase
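A minimal sketch of that query, reusing the structure from the question (the second UID is left as a placeholder):

let senderUid = "2WCS7T8dzzNOdhEtsa8jnlbhrl12"
let receiverUid = "..."   // the other user's UID
Database.database().reference().child("Chats")
    .queryOrdered(byChild: "sender_receiver")
    .queryEqual(toValue: "\(senderUid)_\(receiverUid)")
    .observe(.value, with: { snapshot in
        // only the messages stored with this exact sender_receiver value
    })

Note that each message then needs the sender_receiver property written alongside the existing sender and receiver fields.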
For most chat applications however I recommend creating a database structure that models chat "rooms". So instead of having one long list of all messages, store the messages in rooms/threads, which is also what you show in the UI of your app.
"-RoomId1": {
"msg1": { ... },
"msg2": { ... }
}, "-RoomId2": {
"msg3": { ... },
"msg4": { ... }
}
That way you have the most direct mapping from the database to what you show on the screen.
For a good way to name the "rooms", see Best way to manage Chat channels in Firebase
You'd then still store a mapping for each user of what rooms they're a part of, which might also be useful in securing access.
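A minimal sketch of loading one room under that structure, assuming the rooms live under a top-level Chats node and "-RoomId1" is illustrative:

Database.database().reference().child("Chats").child("-RoomId1")
    .observe(.value, with: { snapshot in
        // each child of the snapshot is one message in this room
        for child in snapshot.children {
            let messageSnapshot = child as! DataSnapshot
            print(messageSnapshot.key, messageSnapshot.value ?? "")
        }
    })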

How to reload a ResourceTable programmatically in Laravel Nova?

I have a custom resource-tool (ledger entry tool) that modifies values of a resource as well as insert additional rows into related resources.
"Account" is the main resources.
"AccountTransaction" and "AccountLog" both get written to when a ledger entry is created. And through events, the account.balance value is updated.
After a successful post of a ledger entry (using Nova.request) in the resource-tool, I would like the new balance value updated in the account detail panel, as well as the new entries in AccountTransaction and AccountLog to be visible.
The simple way would be to simply reload the page, but I am looking for a more elegant solution.
Is it possible to ask these components to refresh themselves from within my resource-tool vue.js component?
Recently had the same issue, until I referred to this block of code.
Nova has Vuex store modules, where they have defined storeFilters.
Committing a changed copy of the filters and then committing the original ones again "reloads" the resources. Haven't done much more research on this matter, but if you are looking for what I think you are looking for, this should be it.
async reloadResources() {
    this.resourceName = this.$router.currentRoute.params.resourceName || this.$router.currentRoute.name;
    if (this.resourceName) {
        let filters_backup = _.cloneDeep(this.$store.getters[`${this.resourceName}/filters`]);
        let filters_to_change = _.cloneDeep(filters_backup);
        filters_to_change.push({});
        // commit a modified filter list, then restore the original one to force a refetch
        await this.$store.commit(`${this.resourceName}/storeFilters`, filters_to_change);
        await this.$store.commit(`${this.resourceName}/storeFilters`, filters_backup);
    }
},
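In the resource-tool component from the question, this method could then be called once the ledger entry has been posted successfully, for example (the endpoint and payload are illustrative):

Nova.request()
    .post('/nova-vendor/ledger-entry-tool/entries', this.form)   // illustrative endpoint and payload
    .then(() => {
        // re-commit the filters so the detail panel and related resource tables refetch
        this.reloadResources();
    });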

Where in the EventStore admin site can I view my saved events?

By the way, how do you create a STREAM?
I use AppendToStreamAsync directly. Is this right, or shall I create a stream first and then append onto it?
I also performed some tests: using the methods below I can write events to EventStore, but I can't read events back from it.
And the most important question: how do I view my saved events in the EventStore admin site?
Here is the code:
public async Task AppendEventAsync(IEvent @event)
{
    try
    {
        var eventData = new EventData(@event.EventId,
            @event.GetType().AssemblyQualifiedName,
            true,
            Serializer.Serialize(@event),
            Encoding.UTF8.GetBytes("{}"));
        var writeResult = await connection.AppendToStreamAsync(
            @event.SourceId.ToString(),
            @event.AggregateVersion,
            eventData);
        Console.WriteLine(writeResult);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}
public async Task<IEnumerable<IEvent>> ReadEventsAsync(Guid aggregateId)
{
    var ret = new List<IEvent>();
    StreamEventsSlice currentSlice;
    long nextSliceStart = StreamPosition.Start;
    do
    {
        currentSlice = await connection.ReadStreamEventsForwardAsync(aggregateId.ToString(), nextSliceStart, 200, false);
        if (currentSlice.Status != SliceReadStatus.Success)
        {
            throw new Exception($"Aggregate {aggregateId} not found");
        }
        nextSliceStart = currentSlice.NextEventNumber;
        foreach (var resolvedEvent in currentSlice.Events)
        {
            ret.Add(Serializer.Deserialize(resolvedEvent.Event.EventType, resolvedEvent.Event.Data));
        }
    } while (!currentSlice.IsEndOfStream);
    return ret;
}
Streams are created automatically as you write events. You should follow the recommended naming convention though as it enables a few features out of the box.
await Connection.AppendToStreamAsync("CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba", expectedVersion, creds, eventData);
It is recommended to name your streams as "category-id" (where the category in our case is the aggregate name), as we are using the DDD+CQRS pattern:
CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba
The stream grows as you write more events to the same stream name. The first event's ID becomes the "aggregateID" in our case, and each new eventID after that is unique. The only way to recreate our aggregate is to replay the events in sequence; if the sequence fails, an exception is thrown.
The reason to use this naming convention is that Event Store runs a few default internal projections for your convenience. The documentation about them is rather convoluted:
$by_category
$by_event_type
$stream_by_category
$streams
By Category
By category basically means there is a stream created by an internal projection; for our CustomerAggregate we subscribe to the $ce-CustomerAggregate stream, and we will see all events of that "category" regardless of their IDs. The event data contains everything we need thereafter.
We use persistent subscribers (small C# console applications) which are set up to work with $ce-CustomerAggregate. Persistent subscribers are great because they remember the last event your client acknowledged. So if the application crashes, you start it again and it continues from the last place it finished.
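A minimal sketch of such a subscriber, assuming the same classic .NET TCP ClientAPI connection used above and that the subscription group "my-group" was created beforehand (the group name and credentials are illustrative):

// Connect to the category stream produced by the $by_category projection
connection.ConnectToPersistentSubscription(
    "$ce-CustomerAggregate",
    "my-group",
    (subscription, resolvedEvent) =>
    {
        Console.WriteLine($"Received {resolvedEvent.Event.EventType}");
        // events are auto-acknowledged by default (autoAck: true)
    },
    userCredentials: new UserCredentials("admin", "changeit"));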
This is where Event Store starts to shine and stand out from the other "event store" implementations.
Viewing your events
The example with persistent subscribers is one way to set things up using code.
You cannot really view "all" your data in the admin site. The purpose of the admin site is to manage projections, manage users, see some statistics, create some projections, and have a recent view of streams and events only. (If you know the IDs you can construct the URLs as you need them, but you can't search for them.)
If you want to see ALL the data then you use the RESTful API, for example with something like Postman. Maybe there is some 3rd-party software that can present a grid-like data source viewer, but I am unaware of any. It would probably also just hook into the REST API, and you could create your own visualiser this way quite quickly.
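For example, a single stream can be read over HTTP like this (using the default admin credentials and port; the stream name is the one from above):

curl -i -u admin:changeit \
    -H "Accept: application/vnd.eventstore.atom+json" \
    "http://127.0.0.1:2113/streams/CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba"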
Again back to code: you can also always read all events from 0 using one of the client libraries; incidentally, with DDD+CQRS you always read the aggregate's stream from 0 to rebuild its state. You can do the same for other requirements.
In some cases, looking at how to use snapshots makes replaying events a lot faster, if you have an extremely large stream to deal with.
Paradigm shift
Event Store has quite a learning curve and is a paradigm shift from conventional transactional databases. Event Store's best friend is CQRS; we use a slightly modified version of the CQRS Lite open source framework.
To truly appreciate Event Store you need to understand DDD concepts and then dig into CQRS/ES; there are a few good YouTube videos and examples.

What is the difference between new sap.ui.model.odata.ODataModel and read?

I am playing around with an OData service and I am very confused about when to use this
var oModel = new sap.ui.model.odata.ODataModel("proxy/http/services.odata.org/V3/(S(k42qhed3hw4zgjxfnhivnmes))/OData/OData.svc");
this.getView().setModel(oModel);
vs
var oModel = new sap.ui.model.odata.ODataModel("odatserviceurl", true);
var productsModel = new JSONModel();
oModel.read("/Products",
    null,
    null,
    false,
    function _OnSuccess(oData, response) {
        var data = { "ProductCollection" : oData.results };
        productsModel.setData(data);
    },
    function _OnError(error) {
        console.log(error);
    }
);
this.getView().setModel(productsModel);
I have two working examples using both approaches, but I am not able to figure out why I would use the read method if I can achieve the same with the first version. Please explain or point me to documentation that can clear up my confusion.
OK, let's start with the models:
JSON Model : The JSON model is a client-side model and, therefore, intended for small datasets, which are completely available on the client. The JSON model supports two-way binding. NOTE: no server side call is made on filtering, searching, refresh.
OData Model : The OData model is a server-side model: the dataset is only available on the server and the client only knows the currently visible rows and fields. This also means that sorting and filtering on the client is not possible. For this, the client has to send a request to the server. Meaning searching/filtering calls odata service again.
Now, lets look at scenarios where we will use these models:
Scenario 1: Showing data to the user in a list/table/display form. Data manipulation is limited to searching and filtering. Here, I would bind the OData model directly to the controls, as only fetching of data is required (your method 1) (NOTE: one-way binding). Remember, here all changes require a call to the server.
Scenario 2: I have an application which has multiple inputs, the user can make changes, and some fields are calculated and mandatory. All in all, many user changes are made which may be temporary, and the user might not want to save them. Here, you don't want to send these temporary changes to the backend yet; you may want to manipulate and validate the data before sending. In this case, use a JSON model after reading the data from the OData model (your method 2). Store the changes in the local JSON model, validate and manipulate them, and finally send the data using OData create/update (see the sketch below). Remember, here the changes DO NOT require a call to the server, as the data is present in the local JSON MODEL.
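A minimal sketch of that last step in scenario 2, assuming the edited entry sits in the local JSON model (the payload and entity set are illustrative):

var oEntry = productsModel.getProperty("/ProductCollection/0");   // the locally edited entry
oModel.create("/Products", oEntry, {
    success: function () { console.log("created"); },
    error: function (oError) { console.log(oError); }
});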
Let me know if this helps you. :)
EDIT : Additional Information :
As per your comment :
Documentation says oModel.read triggers a GET request, but new sap.ui.model.odata.ODataModel("proxy/http/services.odata.org/V3/(S(k42qhed3hw4zgjxfnhivnmes))/OData/OData.svc") does the same thing, so why and when should I use oModel.read?
Here is where you misunderstood. The code
new sap.ui.model.odata.ODataModel("proxy/http/services.odata.org/V3/(S(k42qhed3hw4zgjxfnhivnmes))/OData/OData.svc") will NOT send a read/GET request for data. It calls the OData service and fetches the metadata of the service. A service can have multiple entities.
For example, the service http://services.odata.org/Northwind/Northwind.svc/ has multiple entity sets such as Categories, Customers, Employees etc. So, when I declare new sap.ui.model.odata.ODataModel("http://services.odata.org/Northwind/Northwind.svc/") it will fetch the metadata for the service (not the actual data). Only when you call the desired entity set will it fetch the data. The entity set is specified either:
When you call the read method (like you have specified '/Products'), or
When you bind the entity set name directly to a control like a List or Table (items='{/Products}'), as shown in the sketch below.
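A minimal sketch of the second option, assuming an sap.m.List and the Products entity set from the service above ("{Name}" is just an illustrative property of the entity):

var oList = new sap.m.List();
oList.setModel(oModel);   // the ODataModel from above
oList.bindItems({
    path: "/Products",    // entity set; this is what triggers the GET request
    template: new sap.m.StandardListItem({ title: "{Name}" })
});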
