How can I hit multiple APIs like example.com/1000/getUser, example.com/1001/getUser in Gatling? They are GET calls.
Note: the numbers start from a non-zero integer.
It's hard to give good advice based on the small amount of information in your question, but I'm guessing that passing the user IDs in with a feeder could be a simple, straightforward solution. It largely depends on how your API works, what kind of tests you're planning, and how many users (I'm assuming the numbers are user IDs) you need to test with.
If you need millions of users, a custom feeder that generates increments would probably be better, but beyond that the strategy would be the same. I advise you to read up on the feeder documentation for more information, both on usage in general and on how to write custom feeders: https://gatling.io/docs/3.0/session/feeder/
As an example, if you just need a relatively small number of users, something along these lines could work:
Make a simple CSV file (for example named userid.csv) with all your user IDs and add it to the resources folder:
userid
1000
1001
1002
...
...
The .feed() step adds one value from the CSV file to the Gatling user session, which you can fetch the same way you would fetch any other session value. Each of the ten users injected in this example will get its own row from the CSV file.
setUp(
  scenario("ScenarioName")
    .feed(csv("userid.csv"))
    .exec(http("Name of your request").get("/${userid}/getUser"))
    .inject(
      atOnceUsers(10)
    )
).protocols(http.baseUrl("https://example.com"))
Related
I am working on an application which will generate unique random numbers and then store them into a database. I will check if a number exists through an HTTP request. To get started, I would use around 10,000 numbers.
Is this the right approach?
Generate random numbers one by one, store them in an array while checking the array for uniqueness, and when the array is complete, sort it and store the whole array in the database.
Use the database to check whether a number exists or not.
Which database should I use, given that the application can scale up to 1 million numbers?
It may be more efficient, particularly if you want to generate 1,000,000 numbers, to make them one at a time and use validations in the model/database to prevent duplicates.
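For instance, a minimal sketch of that approach, assuming a RandomNumber model with an integer value column and a unique index on it (all names here are made up):

class RandomNumber < ActiveRecord::Base
  validates :value, :presence => true, :uniqueness => true

  # Generate one number at a time; the uniqueness validation (backed by a
  # unique index, e.g. add_index :random_numbers, :value, :unique => true)
  # rejects duplicates, so we simply retry on a collision.
  def self.generate!
    begin
      create!(:value => rand(1_000_000))
    rescue ActiveRecord::RecordInvalid, ActiveRecord::RecordNotUnique
      retry
    end
  end
end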
As regards choosing a database, it will depend a little on your intended application. There is some info here: Which is the Best database for Rails application?
I can't comment on using a database directly from Ruby without Rails because I have not done that. One of the big pluses of Rails for me is how easy it makes creating apps that use a database.
A couple thoughts:
If you are storing 10 or 10,000 "random" numbers, what difference does it make whether they are random going into the database, or if the database randomly picks one number out of a range of 10,000 sequential numbers? Do you need doubly-random number selections? MySQL, PostgreSQL and other DBMSes can generate random numbers, and you can use their random number generator to retrieve a row, so you could either have it return a value directly from its generator, or grab a row. Either way, you don't need to worry about Ruby creating a random value -- unless you really want "triply"-random numbers. I'd just stick the values of a (1..10_000) range into the database, call that part done, and work on a query to grab records randomly.
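A rough sketch of that last idea, assuming a Number model with an integer value column (names made up; RAND() is MySQL-specific, PostgreSQL and SQLite use RANDOM()):

# Seed the range once...
(1..10_000).each { |n| Number.create!(:value => n) }

# ...then let the database pick a row at random on each request.
random_value = Number.order("RAND()").first.value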
If you want truly random numbers, you can't guarantee uniqueness. If you're happy with pseudo-random, you still have a problem because you could end up returning duplicates from inside the range unless you track which numbers you've used previously for a particular session. How you track uniqueness across a bunch of sessions is going to be an interesting problem if your site gets popular.
If I were doing this, I'd reverse some of the process. I wouldn't store the "random" values in the database; I'd use Ruby's built-in random number generator, and then probably check the database to see if I'd previously generated that number for that particular session. Overall, fewer values would be stored in the database, so lookups to determine uniqueness would happen faster.
That would still be an awkward system to code and would grow inefficient over time as the "unique" records for sessions grew.
To do this without a database I'd create the random/unique range using something like: array = (1..10_000).to_a.shuffle, then each time I needed a value I'd use pop to pull the last value from the randomized array. I'd be tempted to pull from that pool of values for all sessions until it was exhausted, then regenerate it. There'd be a possibility of duplicate "unique" values at that point, but there should be a pretty small chance of the same number reappearing twice in a row.
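A minimal sketch of that shuffled-pool idea (the class name and the 1..10_000 range are just for illustration):

class NumberPool
  def initialize(range = (1..10_000))
    @range = range
    @pool  = @range.to_a.shuffle
  end

  # Pop values until the pool is exhausted, then reshuffle a fresh one.
  # At that point duplicates across pools become possible, as noted above.
  def next_value
    @pool = @range.to_a.shuffle if @pool.empty?
    @pool.pop
  end
end

pool = NumberPool.new
pool.next_value   # => e.g. 4271, unique until the pool is regenerated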
For an upcoming project we need to have unique real world identifiers that are exposed to users for things like Account Numbers or Case Numbers (like a bug tracking ID). These will always be system generated and unchangeable. Right now we plan to run strictly on Heroku.
While (as my name would suggest) I am new to the wonderfulness that is Ruby on Rails, I have a long background in enterprise application development. I'm trying to bridge between what I have done in the past and doing things the "RoR way".
Obviously RoR has wonderful primary key support. I have read dozens of posts here recommending adapting business requirements to just use the out-of-the-box id/key methodology.
So let me describe what I am trying to accomplish and please let me know if you have faced similar objectives and what approach you took.
1) I would like to have a human-readable key with a consistent length. There is value in always having an Account ID or Transaction ID that is the same length (for form validation, training sales staff, etc.). Using Ruby's innate key generation, one could just add buffer characters (e.g. 100000 instead of 1).
2) Compactness: My initial plan was to go with a base 36 unique key (i.e. 36 values, [0..9] and [a..z]). As part of our API/interface we plan on exposing certain non-confidential objects based on a short-form URL (e.g. xx.co/000001). I like the idea of being able to have a five-character identifier in base 36 vs. 7+ in decimal.
So I can think of two possible approaches:
a) add my own field and develop my own unique key generator (or maybe someone will point me to one).
b) Pad leading digits (and I assume I can force the unique key generation to start at 1xxxxxxx rather than 0000001). Then use the to_s(36) method to convert it to and from base 36 for all interactions with humans. Maybe even store the actual ID value in the database in the base 36 format to avoid ongoing conversions, but always do the conversion before a query to avoid the need to have another index.
I'm leaning towards approach B, as it seems like it would be optimal from a DB performance standpoint and would require the least investment in non-value-added overhead. Once again, any real world experience with these topics and thoughts on the best approach would be greatly appreciated.
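For what it's worth, the base 36 conversion itself is trivial in plain Ruby; something like this is roughly what I have in mind for approach B (the offset just forces a consistent five-character width):

OFFSET = 36 ** 4   # smallest five-character base 36 value, "10000"

def to_public_id(record_id)
  (record_id + OFFSET).to_s(36)    # e.g. 1 -> "10001"
end

def from_public_id(public_id)
  public_id.to_i(36) - OFFSET      # "10001" -> 1
end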
Thanks in advance!
I would never use the primary key in a Rails table for anything of business importance. There will come a day when someone on the business end will want to change it, and it'll end up being an enormous pain in the butt and will invalidate a bunch of URLs you and your users thought were canonical and will mess up all your foreign keys and blah blah blah. It's just a really bad idea and I would encourage you not to do it.
The Rails way to do this is to have a new column, called something like number or bug_tracking_number or whatever strikes your fancy, and to implement a before_validation callback that gives it a value. This is where you can let your creativity shine; something like this sounds like what you want:
before_validation( :on => :create ) do
  self.number = CaseNumber.count + 1
end
You can pad the number there, ensure its uniqueness, or do whatever else you want.
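For example, a rough sketch that builds on that callback and folds in the padding and a uniqueness safety net (the class and column names are assumptions, and maximum is used instead of count so deletions can't produce a duplicate):

class CaseNumber < ActiveRecord::Base
  validates :number, :uniqueness => true

  before_validation( :on => :create ) do
    # Start at 100_000 so every number has the same width.
    self.number ||= (CaseNumber.maximum(:number) || 99_999) + 1
  end
end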
I want to perform some simple calculations while staying database-agnostic in my rails app.
I have three models:
.---------------. .--------------. .---------------.
| ImpactSummary |<------| ImpactReport |<----------| ImpactAuction |
`---------------'1 *`--------------'1 *`---------------'
Basically:
ImpactAuction holds data about... auctions (prices, quantities and such).
ImpactReport holds monthly reports that have many auctions, as well as other attributes; it also shows some calculation results based on the auctions.
ImpactSummary holds a collection of reports as well as some information about a specific year, and also shows calculation results based on the two other models.
What I intend to do is to store the results of these really simple calculations (just means, sums, and the like) in the relevant tables, so that reading them would be fast and so that I can easily perform queries on the calculation results.
Is it good practice to store calculation results? I'm pretty sure that's not a very good thing, but is it acceptable?
Is it useful, or should I not bother and perform the calculations on the fly?
If it is good practice and useful, what's the best way to achieve what I want?
That's the tricky part. At first, I implemented a simple chain of callbacks that would update the calculation fields of the parent model upon save (that is, when an auction is created or updated, it marks some_attribute_will_change! on its report and saves it, which triggers its own callbacks, and so on).
This approach fits well when creating or updating a single record, but if I want to work on several records, it will trigger the calculations on the whole chain for each record... So I suddenly find myself forced to put a condition on the callbacks depending on whether I have one or many records, which I can't figure out how to do (using a class method that could be called on a relation? using an instance attribute #skip_calculations on each record? or just using an "outdated" field to mark the parent records for later recalculation?).
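For illustration, the instance-attribute variant I have in mind would look roughly like this (recalculate! is just a placeholder for whatever actually refreshes the stored results):

class ImpactAuction < ActiveRecord::Base
  belongs_to :impact_report

  attr_accessor :skip_calculations
  after_save :refresh_report, :unless => :skip_calculations

  private

  def refresh_report
    impact_report.recalculate!   # placeholder for the real update
  end
end

# Bulk work: skip the per-record callback, recalculate once at the end.
auctions.each { |a| a.skip_calculations = true; a.save! }
report.recalculate!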
Any advice is welcome.
Bonus question: Would it be considered DB agnostic if i implement this with DB views ?
As usual, it depends. If you can perform the calculations in the database, either using a view or using #find_by_sql, I would do so. You'll save yourself a lot of trouble: otherwise you have to keep your summaries up to date whenever you change values, and you've already run into that problem when updating multiple rows. Having a view, or a query that implements the view stored as text in ImpactReport, will allow you to always have fresh data.
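For example, a rough sketch of computing the figures on demand with #find_by_sql (the price and quantity columns are assumptions based on the question):

class ImpactReport < ActiveRecord::Base
  has_many :impact_auctions

  # Let the database compute the aggregates each time they are needed.
  def auction_summary
    ImpactAuction.find_by_sql(
      ["SELECT AVG(price) AS average_price, SUM(quantity) AS total_quantity " +
       "FROM impact_auctions WHERE impact_report_id = ?", id]
    ).first
  end
end

# summary.average_price and summary.total_quantity are then always fresh.
summary = report.auction_summary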
The answer? Benchmark, benchmark, benchmark ;)
I have a table in my database that stores event totals, something like:
event1_count
event2_count
event3_count
I would like to transition from these simple aggregates to more of a time series, so the user can see on which days these events actually happened (like how Stack Overflow shows daily reputation gains).
Elsewhere in my system I already did this by creating a separate table with one record for each daily value - then, in order to collect a time series, you end up with a huge database table and the need to query tens or hundreds of records. It works, but I'm not convinced that it's the best way.
What is the best way of storing these individual events along with their dates so I can do a daily plot for any of my users?
When building tables like this, the real key is having effective indexes. Test your queries with the EXPLAIN statement or the equivalent in your database of choice.
If you want to build summary tables you can query, build a view that represents the query, or roll the daily results into a new table on a regular schedule. Often summary tables are the best way to go as they are quick to query.
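As a sketch, assuming a Rails-style Event model backed by an events table with user_id, event_type and created_at columns (indexed on user_id and created_at), the daily series is a single grouped query, which is also the query a summary view or rollup job would be built from:

# Per-user, per-day counts straight from the raw events table.
daily_counts = Event.where(:user_id => user.id, :event_type => "event1")
                    .group("DATE(created_at)")
                    .count
# => e.g. { "2011-06-01" => 3, "2011-06-02" => 7, ... }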
The best way to implement this is to use Redis. If you haven't worked with Redis before, I suggest you start. You will be surprised how fast this can get :). The way I would do such a thing is to use the Hash data structure Redis provides. Just give every user their own Hash (making a unique key for every user, like "user:23:counters"). Inside this Hash you can use a daily timestamp such as "05/06/2011" as the field and increment its counter every time an event happens, or whatever else you want to do with it!
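A rough sketch of that with the redis-rb gem (same key scheme as above; the date format used for the field is up to you):

require "date"
require "redis"

redis = Redis.new   # assumes a local Redis server on the default port

# Increment today's counter for a given user, e.g. key "user:23:counters".
def record_event(redis, user_id, date = Date.today)
  redis.hincrby("user:#{user_id}:counters", date.strftime("%d/%m/%Y"), 1)
end

record_event(redis, 23)
redis.hgetall("user:23:counters")   # => e.g. { "05/06/2011" => "3", ... }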
A good start would be this thread; it has a simple, beginner-level solution: Time Series Starter. If you are OK with Rails models, this is a way it could work for a so-called "irregular" time series, i.e. an event here and there rather than at a regular interval, like a sensor that sends data when your door is opened.
The other case, and what I was looking for in this thread, is a regular time series DB: values arrive at a fixed interval, say 60 per minute, i.e. one per second, for example from a temperature sensor. This all boils down to datasets with "buckets", as you rightly suspect: a time series table gets long, indexes stop helping at some point, etc. Here is one "bucket" approach using Postgres arrays that would be a feasible idea.
It's not available as "plug and play", as far as my research on the web goes.
I am using Ruby on Rails and have a situation where I am wondering whether it is appropriate to use some sort of key-value store instead of MySQL. I have users that have_many lists and each list has_many words. Some lists have hundreds of words and I want users to be able to copy a list. This is a heavy MySQL task because it is going to have to create these hundreds of word objects at one time.
As an alternative, I am considering using some sort of key-value store where the key would just be the word. A list of words could be stored in a text field in MySQL. Each list could be a new key-value DB? It seems like it would be faster to copy a key-value DB this way rather than have to go through the database. It also seems like this might be faster in general. Thoughts?
The general way to solve this using a relational database would be to have a list table, a word table, and a list-words join table relating the two. You are correct that there would be some overhead, but don't overestimate it; because the table structure is defined, there is very little actual storage overhead for each record, and records can be inserted very quickly.
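As a sketch, that layout in Rails terms (model and table names are assumptions, and it presumes User has_many :lists and a name column on lists):

class List < ActiveRecord::Base
  belongs_to :user
  has_many :list_words
  has_many :words, :through => :list_words
end

class Word < ActiveRecord::Base
  has_many :list_words
  has_many :lists, :through => :list_words
end

class ListWord < ActiveRecord::Base
  belongs_to :list
  belongs_to :word
end

# Copying a list only duplicates join rows; the word records themselves are shared.
def copy_list(original, user)
  copy = user.lists.create!(:name => "#{original.name} (copy)")
  original.list_words.each { |lw| copy.list_words.create!(:word_id => lw.word_id) }
  copy
end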
If you want very fast copies, you could allow lists to be copied-on-write, meaning a single list could be referred to by multiple users, or multiple times by the same user. You only actually duplicate the list when the user tries to add, remove, or change an entry. Of course, this is premature optimization; start simple and only add complications like this if you find they are necessary.
You could use a key-value store as you suggest. I would avoid trying to build one on top of a MySQL text field unless you have a very good reason; it will make any sort of searching by key very slow, as it would require string searching. A key-value data store like CouchDB or Tokyo Cabinet could do this very well, but it would most likely take up more space (as each record has to have its own structure defined and each word has to be recorded separately in each list). The only dimension of performance I would think would be better is if you need massively scalable reads and writes, but that's only relevant for the largest of systems.
I would use MySQL in the straightforward way, and only make changes such as this if you need the performance and can prove that this method will actually be faster.