I have an iOS app where a sequence of three events spread across three pages constitutes a single journey.
I can log each page view and individual events via Flurry.
But can I log the entire sequence as a single event?
PS: I considered using timed events, but they didn't seem like the best fit.
Are you trying to track the conversion of this particular journey? If yes, then this is definitely possible by creating a funnel of these three specific events. For more information on funnels, please refer to: http://support.flurry.com/index.php?title=Analytics/Overview/Lexicon/FunnelAnalysis
However, if you are trying to log this particular journey as one single event, that can also be done. You need to flag all three events/pages and create an event that gets triggered/logged when all three flags are true.
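For what it's worth, here is a minimal sketch of that flag-and-combine approach in Swift, assuming the classic Flurry.logEvent(_:) call from the Flurry iOS SDK; the step and journey event names are made-up placeholders.

```swift
import Flurry_iOS_SDK  // module name may differ depending on how the SDK is integrated

/// Tracks the three journey steps and logs one combined event once all
/// of them have been seen. "JourneyCompleted" and the step names below
/// are example placeholders, not anything defined by the SDK.
final class JourneyTracker {
    static let shared = JourneyTracker()

    private var seenSteps = Set<String>()
    private let requiredSteps: Set<String> = ["Page1Viewed", "Page2Viewed", "Page3Viewed"]

    func logStep(_ step: String) {
        // Log the individual page event, as you already do today.
        Flurry.logEvent(step)
        seenSteps.insert(step)

        // When all three flags are true, log the journey as a single event.
        if seenSteps.isSuperset(of: requiredSteps) {
            Flurry.logEvent("JourneyCompleted")
            seenSteps.removeAll()   // reset for the next journey
        }
    }
}
```

Each page would then call JourneyTracker.shared.logStep("Page1Viewed") (or its own step name) from its viewDidAppear.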
If you need further details, please reach out to us at support#flurry.com.
Thanks
I have a contact flow that is using a pre-recorded voice prompt with a Lex bot for voice recognition. This is the main menu verbiage:
“Thank you for calling. If you would like to use your keypad to select the menu options, say “keypad”, otherwise please listen to the following menu options. For billing questions, say “billing”. To report a missed pickup, say “missed pickup”. If you are a current customer with recycling or other account questions, say “other”. If you are not a current customer, and have questions, say “sales”. To hear the menu again, say “repeat menu”. For all other questions, please remain on the line.”
I have set the error handling in the Lex bot to speak "Sorry, I'm having a hard time understanding you. Let's try using the keypad instead to make sure we route your call properly."
This is working when an utterance is not matched or an invalid option is spoken or pressed. However, I cannot figure out whether it's possible to allow the Lex bot to time out, like in a normal DTMF contact flow, and send the caller to the next step in the menu without playing the error-handling prompt from the Lex bot.
Is this possible?
That's the thing: Lex is not meant to be used this way. It MUST have input to process, and if it reaches Lex's timeout, it will always return an error and deliver the error-handling response.
So you will have to get fancy in the Connect flow to catch the Lex error message and turn it into your own handling of it. But it will be hard to know whether Lex is erroring because it didn't understand or because the user chose not to respond.
Therefore, I would personally avoid building the bot in a way that allows the user to remain silent. The user must direct Lex every step of the way and have easy ways of backing out of an unwanted action.
Remember that Lex is much more powerful than the old automatic call systems, so trying to force Lex into that old system won't work well. Depending on how you design your bot, you can make the conversation much much more natural, accepting a very wide range of responses and directing those into proper actions.
Tips:
Things may have changed more recently, but when I was building Lex/Connect, it was not possible for the user to interrupt a playback message. So I had to also avoid what you are trying to do in the welcome message:
If you would like to use your keypad to select the menu options, say “keypad”, otherwise please listen...
Naturally, a user who does want to use the keypad will try to immediately say "keypad" and probably get frustrated by having to listen to the rest of the playback message. So I design every playback message to be short, deliver information first, and always end on the question, often breaking the conversation up into more branching points to make the questions as specific as possible.
Don't worry about going back and forth with the user too many times. It gives the user comfort knowing they are on the right path to what they want and are able to control the conversation in smaller steps. They will get stressed having to listen to a long list of options and remember what they are while figuring out which one best applies to them.
So make each question as clear as possible and avoid spoonfeeding options. It feels less natural to explicitly state to the user what they should say:
To report a missed pickup, say “missed pickup”.
That is unnatural.
A good middle ground would be asking one question with a list of options and pausing between each option. The user will understand that these are responses they should make, but won't feel unnaturally pressured into exact phrases. For example:
Would you like to, check your billing, report a missed pickup, ask about sales, or something else?
That is natural.
We are comfortable handling those types of questions because we often do that when speaking with humans. You may even want to use question marks instead of commas so that the playback voice uses a questioning intonation with each option. It looks less natural in written form, but it would probably sound more natural.
Last tip: Don't design your bot based on your experience talking with bots. Design your bot based on your experience talking with humans.
I'm working on a little WebVR game using Hayden Lee's library, Networked-Aframe, and I'd like to place users at a specific position as soon as they arrive in the networked room.
I've tried using the 'onConnect' callback, but when it's called the NAF object connectList is empty, so I cannot know if I'm the first one in the room or if other clients are already connected.
What would be the best way to get this kind of information? I can't find anything about it in the docs.
Thanks for your help!
Currently in Networked-Aframe you can only control the position of entities that you have created, and there is no mechanism for determining how many people are in the room.
The only way to do what you're suggesting with NAF 0.2.3 is to set an arbitrary wait period after the onConnect callback, say 10 seconds, in which you hope that all the other users connect to the room. If there are anomalies that take longer and you end up with a collision of two people choosing the same position, you react to that collision (which is also hard, given there aren't events for users connecting yet). NAF 0.3.0 will at least have events for other users joining.
I'm trying to use the Asana events API to track changes in one of our projects, more specifically task movement between sections.
Our workflow is as follows:
We have a project divided into sections. Each section represents a step in the process. When one step is done, the task is moved to the section below.
When a given task reaches a specific step, we want to pass it to an external system. It doesn't have to be the full info - basic fields + URL would be enough.
My idea was to use https://asana.com/developers/api-reference/events to implement a pull-based mechanism to obtain recent changes in tasks.
My problems are:
1. The events API seems to generate a lot of information, but not the useful kind. Moving one single task between sections generates 3 events (2 "changed" actions, one "added" action marked as "system"). During work many tasks will be moved between many sections, but I'm interested only in one specific section. How can I find items moved into that section? I know that there's a resource->text field, but it gives me something like "moved from X to Y (ProjectName)", which is probably a human-readable message that might change in the future.
2. According to the documentation, the resource key should contain task data, but the only info I see is id and name, which is not enough for my case. Is it possible to get hold of tags using the events API? Or any other data that would allow us to classify tasks in our system?
3. Can I listen for events for a specific section instead of tracking the whole project?
Ideas or suggestions are welcome. Thanks
In short:
1. Yes, answer below.
2. Yes, answer below.
3. Unfortunately not; sections are really tasks with a bit of extra functionality. Currently the API represents the relationship between sections and the tasks in them via the memberships field on a task, and not the other way around.
This should help you achieve what you are looking for, I think.
Let's say you have a project Ninja Pipeline with 2 sections, Novice & Expert. Keep in mind, sections are really just tasks whose name ends with a : character, with the extra feature that tasks can belong to them.
Events "bubble up" from children to their parents; therefore, when you move the Wombat task in this project from the Novice section to Expert, you get 3 events. Starting from the top level going down, they are:
The Ninja Pipeline project changed.
The Wombat task changed.
A story was added to the Wombat task.
For your use case, the most interesting event is the second one, about the task changing. The data you really want is, now that the task has changed, the value of the memberships field on the task. If it is now a member of the section you are interested in, take action; otherwise ignore it.
By default, many resources in the API are represented in compact form, which usually only includes the id & name. Use the input/output options to expand objects or select the specific fields you need.
In this case your best bet is to include the query parameter opt_expand=resource when polling events on the project. This should expand all of the resource objects in the payload. For events with type: "task", if resource.memberships[0].section.id=<id_of_the_section> is true, take action; otherwise ignore the event.
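A rough sketch of that polling loop, written here in Swift with URLSession, might look like the following. The events endpoint and the opt_expand parameter come from the API reference linked in the question; the access token, the section id, and the exact JSON field shapes are assumptions to be adjusted against the real responses (for example, the very first call without a sync token returns an error body that contains the initial token).

```swift
import Foundation

// Assumed shape of the events payload: {"data": [...], "sync": "..."}.
// Adjust the fields to whatever the API actually returns for your account.
struct EventsPage: Decodable {
    struct Event: Decodable {
        struct Resource: Decodable {
            struct Membership: Decodable {
                struct Section: Decodable { let id: Int; let name: String? }
                let section: Section?
            }
            let id: Int
            let name: String?
            let memberships: [Membership]?
        }
        let action: String
        let type: String?
        let resource: Resource?
    }
    let data: [Event]?
    let sync: String          // pass this token back on the next poll
}

func pollEvents(projectId: String, sync: String?, token: String,
                completion: @escaping (EventsPage?) -> Void) {
    var components = URLComponents(string: "https://app.asana.com/api/1.0/events")!
    var items = [
        URLQueryItem(name: "resource", value: projectId),
        URLQueryItem(name: "opt_expand", value: "resource")
    ]
    if let sync = sync { items.append(URLQueryItem(name: "sync", value: sync)) }
    components.queryItems = items

    var request = URLRequest(url: components.url!)
    request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")  // personal access token

    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data.flatMap { try? JSONDecoder().decode(EventsPage.self, from: $0) })
    }.resume()
}

// Placeholder id of the section you care about.
let interestingSectionId = 123_456_789

func handle(_ page: EventsPage) {
    // Only "task" events matter here; act when the task is now in the target section.
    for event in page.data ?? [] where event.type == "task" {
        let inSection = event.resource?.memberships?
            .contains { $0.section?.id == interestingSectionId } ?? false
        if inSection {
            // Push the task (id, name, URL, ...) to the external system here.
        }
    }
}
```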
My app syncs 3rd party step data to HealthKit. However, the iPhone itself also does step tracking, so the user ends up seeing data from two sources added together.
The Health app claims that the top source gets displayed, but that's not really the case: data from all sources is added together, which is bad for users. Is there a way to resolve this?
HealthKit may have duplicate data entries from different sources.
For instance, the M7/M8 processor and the Jawbone UP will both add step-counting samples to HealthKit.
When you use a statistics query, HealthKit will provide the correct statistics based on the order the sources are set in the Health app.
Otherwise, you will need to either filter by source (and choose the source you prefer) or find duplicates based on the begin/end timestamps, which is the preferred method in my opinion, but it involves more work.
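As a rough illustration of both approaches in Swift: the first query lets HealthKit compute the daily sum itself (which, per the above, respects the source ordering in the Health app), and the second restricts samples to one source via a source predicate. The preferredSource value is assumed to come from an HKSourceQuery you run separately.

```swift
import HealthKit

let healthStore = HKHealthStore()
let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount)!

let startOfDay = Calendar.current.startOfDay(for: Date())
let todayPredicate = HKQuery.predicateForSamples(withStart: startOfDay, end: Date(),
                                                 options: .strictStartDate)

// Option 1: let HealthKit compute the total. A statistics query with
// .cumulativeSum merges overlapping samples from different sources.
let statsQuery = HKStatisticsQuery(quantityType: stepType,
                                   quantitySamplePredicate: todayPredicate,
                                   options: .cumulativeSum) { _, statistics, _ in
    let steps = statistics?.sumQuantity()?.doubleValue(for: .count()) ?? 0
    print("Steps today (merged across sources): \(steps)")
}
healthStore.execute(statsQuery)

// Option 2: restrict a sample query to a single preferred source and
// inspect startDate/endDate yourself if you de-duplicate manually.
func stepSamples(from preferredSource: HKSource) {
    let sourcePredicate = HKQuery.predicateForObjects(from: preferredSource)
    let combined = NSCompoundPredicate(andPredicateWithSubpredicates: [todayPredicate, sourcePredicate])
    let sampleQuery = HKSampleQuery(sampleType: stepType, predicate: combined,
                                    limit: HKObjectQueryNoLimit, sortDescriptors: nil) { _, samples, _ in
        print("Samples from \(preferredSource.name): \(samples?.count ?? 0)")
    }
    healthStore.execute(sampleQuery)
}
```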
I'm attempting to build a multipeer network app, and I've got everything working for a single conversation between two users. However, the intention was to build a master -> detail app like WhatsApp, where you have a list of conversations and tapping one takes you to the conversation. The problem I'm having is all the housekeeping in maintaining multiple sessions.
My structure is that I have a 'conversation manager' which has an array of 'conversations', which are wrappers around an MCSession that each have an array of messages. When a conversation is started (either by inviting a recipient or by accepting an invitation), the conversation object (session) is added to the array, which is the data source for the master table view. When a conversation is selected from the list, in prepare for segue I pass the conversation object to the detail view controller, and its array of messages becomes the data source for the detail screen.
I'm having numerous issues trying to get this working, such as messages not being delivered in conversations currently not on screen, keeping all the sessions active, not allowing multiple separate conversations between the same two people, etc.
My specific question is this: most of the examples and tutorials, including Apple's sample app, focus on one conversation and one active session at a time. Am I wasting my time trying to get this working? I.e., was the framework only designed to accommodate a single active session at a time?
I ran into this while curious about the same thing!
I realize this is nearly 3 years old, but here's a thought:
If you're using MPCF, then you're accepting that these chats are within Wi-Fi/Bluetooth range. Well, you could accept the limitation of one session at a time and the limitation of up to 7 active chats at any time. You and seven others, 1:1? Then you can just pair those chats up: each peer could have 7 threads. We can assume that only 8 people have your app open and are within range. I realize this doesn't completely solve your problem, but hopefully it gives some direction, since I'm not sure another option is possible.
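In case it helps, here is a rough Swift sketch of that "one session, one conversation per peer" idea. Conversation and ConversationManager are made-up names mirroring the structure described in the question, not anything from the framework, and the display name is a placeholder.

```swift
import MultipeerConnectivity

/// One message thread per connected peer; purely illustrative.
final class Conversation {
    let peer: MCPeerID
    var messages: [String] = []
    init(peer: MCPeerID) { self.peer = peer }
}

final class ConversationManager: NSObject, MCSessionDelegate {
    let myPeerId = MCPeerID(displayName: "Me")   // placeholder display name

    // A single session holds every connected peer (up to 8 total).
    lazy var session: MCSession = {
        let s = MCSession(peer: myPeerId, securityIdentity: nil, encryptionPreference: .required)
        s.delegate = self
        return s
    }()

    // One conversation per peer -- the master list's data source.
    private(set) var conversations: [MCPeerID: Conversation] = [:]

    func send(_ text: String, to peer: MCPeerID) throws {
        // Send only to the peer for this conversation, not to everyone in the session.
        try session.send(Data(text.utf8), toPeers: [peer], with: .reliable)
        conversations[peer]?.messages.append(text)
    }

    // MARK: MCSessionDelegate
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {
        if state == .connected, conversations[peerID] == nil {
            conversations[peerID] = Conversation(peer: peerID)
        }
    }

    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        // Route the incoming message to the right conversation, even if its
        // detail screen is not currently on screen.
        conversations[peerID]?.messages.append(String(decoding: data, as: UTF8.self))
    }

    // Remaining delegate methods are required but unused in this sketch.
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}
```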
And no answer/help for three years kinda stinks so I'm hoping to pitch in!
If you did find a better answer I'd love to know what you found!