Multiple POST requests using JMeter

I have to do a stress test on my application that creates 1000 users. In order to create a user I send a POST request with a JSON body:
{
"code": "string",
"domainName": "string",
"enabled": true,
"name": "string"
}
I can't figure out how I am going to create more than one user with JMeter. Is there a for loop? Also, how do I get around the fact that code has to be unique, so each user needs a unique code?

To create more virtual users, just define as many as you like under the Thread Group.
To send unique data you can replace your code value with a JMeter Function, something like:
{
"code": "${__threadNum}",
"domainName": "string",
"enabled": true,
"name": "string"
}
The above example uses the __threadNum function, which returns the current virtual user number, so code will be 1 for the first user, 2 for the second user, etc. You can also consider the following alternatives:
__Random() - generates a random number within the given range
__RandomString() - generates a random string from the given source data
__UUID() - generates a unique GUID
__counter() - generates an incrementing number each time it is called
See Apache JMeter Functions - An Introduction for more information on the JMeter Functions concept.
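For example, a minimal sketch using __UUID() to guarantee uniqueness (any of the functions above can be plugged in the same way):
{
"code": "${__UUID()}",
"domainName": "string",
"enabled": true,
"name": "string"
}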

Yes, there is a Loop Controller, and you can load data from CSV within that loop - have a look at this Stack Overflow answer

Although using a loop would create your 1000 users, they would not execute at the same time. Assuming your intention is to execute a stress test with 1000 users making requests concurrently, a normal Thread Group would suffice.
You can use the CSV Data Set Config (http://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config) to set up the different users so each thread has its own user variables, as sketched below. There are other Thread Group controllers that you can use if you want more elaborate behavior.
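As an illustrative sketch (the file path and column names are assumptions): point a CSV Data Set Config at a users.csv with Variable Names set to code,name, where the file contains one user per line:
user001,Alice
user002,Bob
Each thread then reads its own line, and the request body simply references the variables:
{
"code": "${code}",
"domainName": "string",
"enabled": true,
"name": "${name}"
}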

How to create a new skill in Twilio Flex?

I'm trying to set up a webchat. I need to create different agents with different skills (e.g. sales, marketing). I'm not able to find an option to create skills (or how to assign them to the respective agents).
As far as I know there is no UI for creating skills. They are arbitrary strings you attach to worker attributes. If you go to TaskRouter -> Workers -> select a worker, you'll see something like:
{
"contact_uri": "client:joe_smith",
"full_name": "Joe Smith",
"image_url": "https://www.gravatar.com/avatar/0078cd9b02fc2550990c9c5c8f261c22?d=mp",
"email": "joe#example.com",
"roles": ["admin"],
"routing": { "skills": ["some_skill", "another_skill"] }
}
To add a skill, add any string you want to the skills array in the worker attributes, either in the console or via the REST API as sketched below.
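A minimal sketch of the REST route, assuming the twilio-ruby gem (v5+); the SIDs and the skill name are illustrative:
require 'twilio-ruby'
require 'json'

client = Twilio::REST::Client.new(account_sid, auth_token)
worker = client.taskrouter.workspaces('WSxxxxxxxx').workers('WKxxxxxxxx')

# Read the current attributes (a JSON string), append the skill, write them back
attrs = JSON.parse(worker.fetch.attributes)
attrs['routing'] ||= {}
attrs['routing']['skills'] = (attrs['routing']['skills'] || []) | ['sales']
worker.update(attributes: attrs.to_json)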
You can create worker skills directly in the Flex UI: go to https://flex.twilio.com/admin/ > Skills and create your skills. After creating them, go to https://flex.twilio.com/teams/ > select a worker > select the desired skills to attach them to the agent.
I hope that helps you.

Sending object as a variable to Mandrill via Rails

I'm converting an email template from Rails to Mandrill, the content for which requires a fair amount of data, some of which is nested through several associations.
Therefore, I'd like to pass objects via Mandrill's global_merge_vars, such as the (simplified) following:
[{ 'name'=>'order', 'content'=> @order.to_json(include:
{ user: { only: :first_name } },
methods: [:method1,
:method2,
:method3,
:method4])
}]
This passes through to the Mandrill template under the order variable, similar to the following:
{"id":11,"number":"xxxx","item_total":"112.0"...
"user":{"first_name":"Steve"},"method1":"£0.00","method2":"£112.00",
"method3":"£112.00","method4":"£0.00"}
The problem is, I can't access anything within order (using Handlebars), e.g. {{order.id}}, {{order['id']}} etc. won't work.
It's not an option to break out data into a large number of variables, as some elements are collections and their associations.
I believe the problem occurs because everything is stringified when the variables are compiled for Mandrill -- therefore breaking the JSON object -- with the following snippet showing what is sent across:
"global_merge_vars"=>[{"name"=>"order", "content"=>"{\"id\":11,
\"number\":\"xxxx\",\"item_total\":\"112.0\"...
I can't seem to find any documentation or suggestions for dealing with this, so am wondering whether it's possible to pass data of this nature and, if so, how to pass it correctly so the objects are accessible in the Mandrill template. Any advice greatly appreciated!
Steve.
try this:
[{ 'name'=>'order', 'content'=> JSON.parse(@order.to_json(include:
{ user: { only: :first_name } },
methods: [:method1,
:method2,
:method3,
:method4]))
}]
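The reason this should work (a hedged explanation, not from Mandrill's docs): to_json produces a String, which the Mandrill client then escapes into the quoted blob shown in the question, whereas JSON.parse turns it back into a Ruby hash, so content is serialized as a real nested JSON object and Handlebars paths such as {{order.id}} or {{order.user.first_name}} can resolve into it.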

PredictionIO suggests items that have already been liked

I'm trying to use the PredictionIO recommendation engine in a Rails app to suggest items for users to like. So, I have three models: user, product and favorite (user_id, product_id). This is what the algorithms.json file looks like:
[
{
"name": "ncMahoutItemBased",
"params": {
"booleanData": true,
"itemSimilarity": "LogLikelihoodSimilarity",
"weighted": false,
"threshold": 0.6,
"nearestN": 10,
"unseenOnly": false,
"freshness" : 0,
"freshnessTimeUnit" : 86400
}
}
]
The thing is, after training and deploying, I get a list of suggested items for a user, some of which the user has already liked. Why is this?
Also, what is the name of the user-based algorithm to use instead of "ncMahoutItemBased"?
Thanks.
There is nothing wrong with recommending an item the user has shown a preference for. This is expected behavior in a clothing store, where I always buy Levi's Jeans and they want to remind me of that.
In your case you may not want to recommend items already preferred, so filter them out of the recommendations. In most Mahout recommenders this is done for you, so PredictionIO must have disabled that feature. Is there some param or config option that tells PredictionIO to filter out a user's preferred items?
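(A hedged pointer: the algorithms.json posted in the question already contains "unseenOnly": false, which reads like exactly that switch; setting it to true should limit recommendations to items the user has not already acted on.)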

Calculating the Count of a related Collection

I have two models, Professionals and Projects:
Professionals hasMany Projects
Projects belongsTo Professionals
On the Professionals index page I need to show the number of projects each Professional has.
Right now I am doing the following query to get all the Professionals.
How can I fetch the count of the Projects for each of the Professionals as well?
@pros = Professionals.all.asc(:name)
I would add a projects_count column to Professional.
Then
class Project
belongs_to :professional, counter_cache: true
end
And Rails will maintain the count every time a project is added to or removed from a professional. Then you can just call .projects_count on each professional.
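For completeness, a sketch of the backing field (hedged: if these are ActiveRecord models you need a migration like the one below, since the counter cache column must exist and follow the <association>_count naming convention; if they are Mongoid models, as the next answer assumes, declare field :projects_count, type: Integer, default: 0 on Professional instead):
class AddProjectsCountToProfessionals < ActiveRecord::Migration
  def change
    # Default of 0 keeps counts sane for existing rows
    add_column :professionals, :projects_count, :integer, default: 0, null: false
  end
end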
Edit:
If you actually want the additional data
@pros = Professionals.includes(:projects).order(:name)
Then
@pros.each do |pro|
pro.name
pro.projects.each do |project|
project.name
end
end
I am just abstracting here because Rails really isn't my bag, but let's talk about the schema and things to look for. As such the code is really just "pseudo-code", but it should be close to what is wanted.
Consider "just" how MongoDB is going to store the data, given that you presumably have multiple collections. I am not saying that is or is not the best model, just dealing with it as it stands.
Let us assume we have this data for "Projects"
{
"_id" : ObjectId("53202e1d78166396592cf805"),
"name": "Project1,
"desc": "Building Project"
},
{
"_id" : ObjectId("532197fb423c37c0edbd4a52")
"name": "Project2",
"desc": "Renovation Project"
}
And that for "Professionals" we might have something like this:
{
"_id" : ObjectId("531e22b7ba53b9dd07756bc8"),
"name": "Steve",
"projects": [
ObjectId("53202e1d78166396592cf805"),
ObjectId("532197fb423c37c0edbd4a52")
]
}
Right. So now we see that the "Professional" has to have some kind of concept that there are related items in another collection and what those related items are.
Now I presume (and it's not my bag) that there is a way to get down to the lower level of the driver implementation in Mongoid (I believe that is Moped, off the top of my head) and that it is likely (from memory) invoked in a similar way to this (assuming "Professionals" as the model class name):
Professionals.collection.aggregate([
{ "$unwind": "$projects" },
{ "$group": {
"_id": "$_id",
"count": { "$sum": 1 }
}}
])
Or in some similar form that is more or less the analog of what you would do in the native mongodb shell. The point being, with something like this you just made the server do the work, rather than pulling all the results to your client and looping through them.
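With the two sample documents above, each professional should come back with its project count, something like:
{ "_id": ObjectId("531e22b7ba53b9dd07756bc8"), "count": 2 }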
Suggesting that you use native code to iterate results from your data store is counterproductive and counterintuitive to using any kind of back-end database store. Whether it be a SQL database or a NoSQL database, the general preference is: as long as the database has methods to do the aggregation work, use them.
If you are writing code that essentially pulls every record from your store and then cycles through to get the result, then you are doing something wrong.
Use the database methods. Otherwise you might as well just use a text file and be done with it.

Keeping elasticsearch and database in sync

I am trying to figure out a way to keep my MySQL DB and Elasticsearch in sync. I have set up a JDBC river using the jprante/elasticsearch-river-jdbc plugin for Elasticsearch. When I execute the below request:
curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
"type" : "jdbc",
"jdbc" : {
"driver" : "com.mysql.jdbc.Driver",
"url" : "jdbc:mysql://localhost:3306/MY-DATABASE",
"user" : "root",
"password" : "password",
"sql" : "select * from users",
"poll" : "1m"
},
"index" : {
"index" : "test_index",
"type" : "user"
}
}'
the river starts indexing data, but for some records I get org.elasticsearch.index.mapper.MapperParsingException. There is discussion related to this issue here, but I want to know a way to get around it.
Is it possible to permanently fix this by creating an explicit mapping for all 'fields' of the 'type' that I am trying to index, or is there a better way to solve this issue?
Another question that I have is: when the JDBC river polls the database again, it seems to re-index the entire data set (given in the SQL query) into ES. I am not sure, but is this done because Elasticsearch wants to add fresh data as well as update any changes in the existing data? Is it possible to index only the fresh data, if the table's data is static?
Did you look at default mapping?
http://www.elasticsearch.org/guide/reference/mapping/dynamic-mapping.html
I think it can help you here.
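If you do want the explicit mapping from your first question, a hedged sketch (the index, type and field names are illustrative, using the pre-2.x syntax the rivers ran against) is to create the index with a mapping before starting the river:
curl -XPUT 'localhost:9200/test_index' -d '{
  "mappings": {
    "user": {
      "properties": {
        "id": { "type": "integer" },
        "name": { "type": "string" },
        "created_at": { "type": "date" }
      }
    }
  }
}'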
If you have an insertion date field in your table, you can use it to filter what you have to index.
See https://github.com/jprante/elasticsearch-river-jdbc#time-based-selecting
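As a sketch of the idea only (the column name is an assumption; the link above documents the river's exact mechanism), a select matching the 1m poll in plain MySQL syntax would look like:
"sql" : "select * from users where created_at > now() - interval 1 minute"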
HTH
David
Elasticsearch has dropped the river concept altogether. It is not a recommended path, because it usually doesn't make sense to keep the same normalized SQL table structure in a document store like Elasticsearch.
Say you have Product as an entity with some attributes, and Reviews on the Product entity as a parent-child table, since there can be multiple reviews for the same product.
Products(Id, name, status,... etc)
Product_reviewes(product_id, review_id)
Reviews(id, note, rating,... etc)
In a document store you may want to create a single index, named say product, that includes Product{attribute1, attribute2, ... reviews[review1, review2, ...]}.
Here is an approach to syncing in such a setup.
Assumptions:
SQL database (the true source of record)
Elasticsearch or any other NoSQL document store
Solution:
As soon as an update happens, publish an event to JMS/AMQP/a database queue/a file-system queue/Amazon SQS etc., carrying either the full Product or just the primary object ID (I would recommend just the ID).
The queue consumer should then call the web service to get the full object (if only the primary ID was pushed to the queue), or just take the object itself, and send the respective changes to Elasticsearch/the NoSQL database, roughly as sketched below.
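A minimal Ruby sketch of that consumer, using only the standard library; fetch_product_ids_from_queue, the products URL and the index layout are all hypothetical placeholders for your own infrastructure:
require 'net/http'
require 'json'
require 'uri'

# Hypothetical: pop a batch of updated product IDs from your queue (SQS, AMQP, ...)
product_ids = fetch_product_ids_from_queue

product_ids.each do |id|
  # Fetch the full, denormalized product (including its reviews) from the source of record
  product = JSON.parse(Net::HTTP.get(URI("https://example.com/api/products/#{id}")))

  # Upsert the document into Elasticsearch under a stable ID so re-deliveries are idempotent
  uri = URI("http://localhost:9200/product/product/#{id}")
  req = Net::HTTP::Put.new(uri, 'Content-Type' => 'application/json')
  req.body = product.to_json
  Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
end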
