How to create a new skill in Twilio Flex?

I'm trying to set up a webchat. I need to create different agents with different skills (e.g. sales, marketing). I'm not able to find an option to create skills (or to assign them to the respective agents).

As far as I know there is no UI for creating skills. They are arbitrary strings you attach to worker attributes. If you go to TaskRouter -> Workers -> select a worker, you'll see something like:
{
  "contact_uri": "client:joe_smith",
  "full_name": "Joe Smith",
  "image_url": "https://www.gravatar.com/avatar/0078cd9b02fc2550990c9c5c8f261c22?d=mp",
  "email": "joe@example.com",
  "roles": ["admin"],
  "routing": { "skills": ["some_skill", "another_skill"] }
}
To add a skill, add any string you want to the skills array in the worker's attributes.
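For illustration, the same edit can be scripted with the Twilio helper library instead of the TaskRouter console. A minimal sketch in Python using the twilio package; the account, workspace and worker SIDs are placeholders for your own, and "sales" stands in for whatever skill you want to attach:
import json
from twilio.rest import Client

# Credentials and SIDs from the Twilio console; placeholders here.
client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")
worker = (client.taskrouter
          .workspaces("WSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
          .workers("WKXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"))

# Worker attributes are stored as a JSON string: read, modify, write back.
attributes = json.loads(worker.fetch().attributes)
skills = attributes.setdefault("routing", {}).setdefault("skills", [])
if "sales" not in skills:
    skills.append("sales")
worker.update(attributes=json.dumps(attributes))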

You can create worker skills directly in the Flex UI: go to https://flex.twilio.com/admin/ > Skills and create your skills there. After creating them, go to https://flex.twilio.com/teams/ > select a worker > select the desired skills to attach them to the agent.
I hope that helps.

Related

Multiple POST requests using JMeter

I have to do a stress test on my application to create 1000 users. In order to create a user I do a POST request with a JSON body:
{
  "code": "string",
  "domainName": "string",
  "enabled": true,
  "name": "string"
}
I can't figure out how I am going to create more than one user with JMeter. Is there a for loop? Also, how do I get around the fact that code has to be unique, so each user needs a unique code?
To create more virtual users, just define as many as you like under the Thread Group.
To send unique data you can replace your code value with a JMeter Function, something like:
{
  "code": "${__threadNum}",
  "domainName": "string",
  "enabled": true,
  "name": "string"
}
The above example uses the __threadNum function, which returns the current virtual user's number, so code will be 1 for the first user, 2 for the second user, etc. You can also consider the following alternatives:
__Random() - generates a random number within the given range
__RandomString() - generates a random string from the given source characters
__UUID() - generates a unique UUID (GUID)
__counter() - generates an incrementing number each time it is called
See Apache JMeter Functions - An Introduction for more information on the JMeter Functions concept.
Yes, there is a Loop Controller, and you can load data from a CSV file within that loop - have a look at this StackOverflow answer.
Although using a loop would create your 1000 users, they would not execute at the same time. Assuming your intention is to execute a stress test with 1000 users making requests concurrently, a normal Thread Group would suffice.
You can use the CSV Data Set Config (http://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config) to set up the different users so that each thread has its own user variables. There are other thread group controllers you can use if you want more elaborate behavior.
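If you go the CSV route, the uniqueness requirement is easy to satisfy by pre-generating the file. A minimal sketch in Python (the file name, column layout and code format are all illustrative, not anything JMeter mandates):
import csv

# Write 1000 rows for JMeter's CSV Data Set Config; a simple counter
# guarantees that every "code" value is unique.
with open("users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for i in range(1, 1001):
        writer.writerow([f"user-{i:04d}", f"domain-{i}", f"User {i}"])
Point the CSV Data Set Config at this file, name the variables (e.g. code, domainName, name), and reference them in the request body as ${code} and so on.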

PredictionIO suggests items that have already been liked

I'm trying to use the PredictionIO recommendation engine in a Rails app to suggest items for users to like. So I have three models: user, product and favorite (user_id, product_id). This is what the algorithms.json file looks like:
[
  {
    "name": "ncMahoutItemBased",
    "params": {
      "booleanData": true,
      "itemSimilarity": "LogLikelihoodSimilarity",
      "weighted": false,
      "threshold": 0.6,
      "nearestN": 10,
      "unseenOnly": false,
      "freshness": 0,
      "freshnessTimeUnit": 86400
    }
  }
]
The thing is, after training and deploying, I get a list of suggested items for a user, some of which the user has already liked. Why is this?
Also, what is the name of the user-based algorithm to use instead of "ncMahoutItemBased"?
Thanks.
There is nothing wrong with recommending an item the user has shown a preference for. This is expected behavior in a clothing store, where I always buy Levi's jeans and they want to remind me of that.
In your case you may not want to recommend items already preferred, so filter them out of the recommendations. In most Mahout recommenders this is done for you, so PredictionIO must have disabled that feature. Is there some param or config option that tells PredictionIO to filter out a user's preferred items?
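Judging by the algorithms.json in the question, the unseenOnly param looks like exactly that switch; assuming it does what its name suggests, flipping it to true should restrict recommendations to items the user has not yet acted on:
{
  "name": "ncMahoutItemBased",
  "params": {
    "unseenOnly": true
  }
}
(The other params are omitted here for brevity; this is an inference from the config shown above, so verify it against the PredictionIO docs for your version.)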

Calculating the Count of a related Collection

I have two models, Professionals and Projects:
Professionals hasMany Projects
Projects belongsTo Professionals
On the Professionals index page I need to show the number of projects each Professional has.
Right now I am doing the following query to get all the Professionals.
How can I fetch the count of the Projects for each of the Professionals as well?
@pros = Professionals.all.asc(:name)
I would add a projects_count column to Professional.
Then
class Project
  belongs_to :professional, counter_cache: true
end
Rails will then maintain the count every time a project is added to or removed from a professional, and you can just call .projects_count on each professional.
Edit:
If you actually want the additional data:
@pros = Professionals.includes(:projects).order(:name)
Then
@pros.each do |pro|
  pro.name
  pro.projects.each do |project|
    project.name
  end
end
I am just abstracting here, because the Rails side really isn't my bag, so let's talk about the schema and things to look for. As such the code is really just pseudo-code, but it should be close to what is wanted.
Consider just how MongoDB is going to store the data, given that you presumably have multiple collections. I am not saying that is or is not the best model, just dealing with it.
Let us assume we have this data for "Projects":
{
  "_id": ObjectId("53202e1d78166396592cf805"),
  "name": "Project1",
  "desc": "Building Project"
},
{
  "_id": ObjectId("532197fb423c37c0edbd4a52"),
  "name": "Project2",
  "desc": "Renovation Project"
}
And that for "Professionals" we might have something like this:
{
  "_id": ObjectId("531e22b7ba53b9dd07756bc8"),
  "name": "Steve",
  "projects": [
    ObjectId("53202e1d78166396592cf805"),
    ObjectId("532197fb423c37c0edbd4a52")
  ]
}
Right. So now we see that the "Professional" has to have some concept that there are related items in another collection, and of what those related items are.
Now I presume (and it's not my bag) that there is a way to get down to the lower level of the driver implementation in Mongoid (I believe that is Moped, off the top of my head), and that it is likely invoked in a way similar to this (assuming "Professionals" as the class model name):
Professionals.collection.aggregate([
  { "$unwind": "$projects" },
  { "$group": {
    "_id": "$_id",
    "count": { "$sum": 1 }
  }}
])
Or in some similar form that is more or less the analog to what you would do in the native MongoDB shell. The point is that with something like this you just made the server do the work, rather than pulling all the results to your client and looping through them.
Suggesting that you use native code to iterate results from your data store is counterproductive and counterintuitive to using any kind of back-end database store. Whether it be a SQL database or a NoSQL database, the general preference is: as long as the database has methods to do the aggregation work, use them.
If you are writing code that essentially pulls every record from your store and then cycles through to get the result, then you are doing something wrong.
Use the database methods. Otherwise you might as well just use a text file and be done with it.

Keeping elasticsearch and database in sync

I am trying to figure out a way to keep my MySQL database and Elasticsearch in sync. I have set up a JDBC river using the jprante / elasticsearch-river-jdbc plugin for Elasticsearch. When I execute the request below:
curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
  "type": "jdbc",
  "jdbc": {
    "driver": "com.mysql.jdbc.Driver",
    "url": "jdbc:mysql://localhost:3306/MY-DATABASE",
    "user": "root",
    "password": "password",
    "sql": "select * from users",
    "poll": "1m"
  },
  "index": {
    "index": "test_index",
    "type": "user"
  }
}'
the river starts indexing data, but for some records I get org.elasticsearch.index.mapper.MapperParsingException. There is a discussion related to this issue here, but I want to know a way to get around it.
Is it possible to fix this permanently by creating an explicit mapping for all fields of the type that I am trying to index, or is there a better way to solve the issue?
Another question I have: when the jdbc-river polls the database again, it seems to re-index the entire data set (given in the SQL query) into ES. I am not sure, but is this done because Elasticsearch wants to add fresh data as well as pick up any changes in the existing data? Is it possible to index only the fresh data, if the existing table data is static?
Did you look at the default mapping?
http://www.elasticsearch.org/guide/reference/mapping/dynamic-mapping.html
I think it can help you here.
If you have an insertion-date field in your table, you can use it to filter what you have to index.
See https://github.com/jprante/elasticsearch-river-jdbc#time-based-selecting
HTH
David
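For illustration, the time-based selecting idea above boils down to making the river's sql pick up only recently changed rows. A sketch (the updated_at column is an assumption about your schema, and the exact placeholder and interval support is documented in the plugin README linked above):
"sql" : "select * from users where updated_at > now() - interval 2 minute"
With a 1m poll, a slightly wider window like this makes consecutive runs overlap rather than miss rows at the boundary.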
Elasticsearch has dropped the river sync concept altogether. It is not a recommended path, because it usually doesn't make sense to keep the same normalized SQL table structure in a document store like Elasticsearch.
Say you have a Product entity with some attributes, and Reviews on the Product entity in a parent-child table, as there can be multiple Reviews of the same Product:
Products(id, name, status, ... etc)
Product_reviews(product_id, review_id)
Reviews(id, note, rating, ... etc)
In the document store you may want to create a single index, say product, where each document embeds the product's attributes together with its reviews: Product{attribute1, attribute2, ..., reviews: [review1, review2, ...]}
Here is an approach to syncing in such a setup.
Assumptions:
SQL database (the true source of record)
Elasticsearch or any other NoSQL document store
Solution:
As soon as an update happens, publish an event to JMS/AMQP/a database queue/a file-system queue/Amazon SQS etc., carrying either the full Product or just the primary object ID (I would recommend just the ID).
The queue consumer should then call a web service to get the full object (if only the primary ID was pushed to the queue), or just take the object itself, and send the respective changes to Elasticsearch/the NoSQL database.
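A minimal sketch of that consumer in Python, assuming the elasticsearch client package; pop_product_id() and load_product() are hypothetical stand-ins for whatever queue and SQL data access you actually use:
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

def sync_product(product_id):
    # Re-read the full document from the source of truth (SQL)...
    product = load_product(product_id)  # hypothetical DB lookup
    # ...and upsert it into the index keyed by the primary ID, so
    # replaying the same queue event twice is harmless (idempotent).
    es.index(index="products", id=product_id, body=product)

while True:
    sync_product(pop_product_id())  # hypothetical blocking queue read
Pushing only the ID keeps the queue payload small, and a burst of updates to the same product collapses into simply re-reading its latest state.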

Ejabberd structures and roster

I'm new to ejabberd, but the first thing I noticed is the complete absence of documentation and code comments.
I have many doubts, but the main ones are:
inside the record jid, what is the difference between user and luser, server and lserver, resource and lresource?
-record(jid, {user, server, resource,
              luser, lserver, lresource}).
what is the record iq useful for?
-record(iq, {id = "",
             type,
             xmlns = "",
             lang = "",
             sub_el}).
what is a subscription inside ejabberd? a relation between two users?
what is the jid inside the roster?
I know these questions may be quite stupid, but I don't really know how to understand this without asking. Thanks.
what is the difference between user and luser?
luser, lserver and lresource are the corresponding parts of the JID after being processed with the appropriate stringprep profile. See https://www.rfc-editor.org/rfc/rfc3920#section-3 . For example, for the JID Juliet@Example.COM, user is "Juliet" and server is "Example.COM", while luser is "juliet" and lserver is "example.com". In short, inside ejabberd you will most likely always use the processed versions, and the raw ones only when serializing the JID back to the wire.
what is the record iq useful for?
it makes it easier to match on the IQ namespace, id or type (get|set|error) than to retrieve that info from the underlying XML each time.
what is a subscription inside ejabberd? a relation between two users?
basically, yes. A subscription from user A to user B means A is interested in B's presence. But the subscription can be in different states (as the other user has to accept it, etc.). See http://xmpp.org/rfcs/rfc3921.html#sub .
what is the jid inside the roster?
sorry, I didn't understand that one - what exactly do you want to know?
