import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class myTest extends Simulation {

  val headers = Map("Authorization" -> "longAuthHeader")

  val httpProtocol = http
    .baseUrl("http://baseurl.com:8000")
    .headers(headers)

  val scn = scenario("Scenario Name")
    .exec(http("request")
      .get("/data/url/"))

  setUp(scn.inject(constantUsersPerSec(40) during (2 minutes)))
    .protocols(httpProtocol)
    .throttle(jumpToRps(40), holdFor(2 minutes))
}
Using the above, I am creating a Gatling test that performs 40 RPS against baseurl.com:8000/data/url/ and maintains this rate for 2 minutes.
The problem with the above approach is that only one user (as identified by the auth header) performs the test.
What steps do I need to take to change this test so that, say, 40 users (40 different auth headers) perform 1 RPS each? I would then have 40 RPS distributed over 40 users rather than 40 RPS from one user.
This is important because my application behaves slightly differently depending on the user context (different auth headers = different behaviour).
The Gatling advanced tutorial describes how to do exactly this: gatling.io/docs/current/advanced_tutorial#advanced-tutorial. You're interested in the sections on configuring virtual users and feeders.
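For instance, a minimal sketch of the feeder approach, assuming a hypothetical auth_headers.csv with an "auth" column holding one of the 40 Authorization values per line; the shared protocol-level header moves onto the request, where each virtual user fills in its own value:

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class myTest extends Simulation {

  // Assumption: auth_headers.csv has a header row "auth" and 40 data rows.
  val headerFeeder = csv("auth_headers.csv").circular

  val httpProtocol = http
    .baseUrl("http://baseurl.com:8000") // no shared Authorization header here

  val scn = scenario("Scenario Name")
    .feed(headerFeeder) // each virtual user takes the next row
    .exec(http("request")
      .get("/data/url/")
      .header("Authorization", "${auth}"))

  setUp(scn.inject(constantUsersPerSec(40) during (2 minutes)))
    .protocols(httpProtocol)
    .throttle(jumpToRps(40), holdFor(2 minutes))
}

With .circular, the users injected over the two minutes keep cycling through the 40 header values, so the 40 RPS end up spread across 40 distinct auth contexts.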
I have a test automation project that uses REST Assured. In my tests, I use WireMock for external API calls. I don't have a problem when I run the tests one by one, but when I run them in parallel, WireMock cannot match requests in time. After a few seconds I can see that a request gets matched, but by then my requests have already timed out.
I ran an experiment on a single test.
@Test(invocationCount = 10, threadPoolSize = 10)
This annotation runs a test 10 times in parallel. If I run a test 10 times in parallel, WireMock lags in matching requests. If I run it 2 or 3 times in parallel, I don't have a problem.
My stubs are dynamic, so there cannot be any conflict. Here is a sample (I write Kotlin):
val id = randomId()       // this method creates a random id
val token = randomToken() // this method creates a random token
stubGetSpend(id, token)

fun stubGetSpend(id: String, token: String): StubMapping {
    return stubFor(
        WireMock.get(urlEqualTo("/$id/insights?access_token=$token&fields=spend"))
            .willReturn(
                aResponse()
                    .withStatus(HttpStatus.SC_OK)
                    .withHeader("Content-Type", "application/json")
            )
    )
}
I would like to get the number of pull requests and issues for a particular GitHub repo. At the moment, the method I'm using is really clumsy.
Using the octokit gem and the following code:
# Builds data that is sent to the API
def request_params
  data = {}
  # labels example: "bug,invalid,question"
  data["labels"] = labels.present? ? labels : ""
  # filter example: "assigned" "created" "mentioned" "subscribed" "all"
  data["filter"] = filter
  # state example: "open" "closed" "all"
  data["state"] = state
  data
end
Octokit.auto_paginate = true
github = Octokit::Client.new(access_token: oauth_token)
github.list_issues("#{user}/#{repository}", request_params).count
The data received is extremely big, so it's very inefficient in terms of memory. I don't need the data about the issues themselves, only how many there are (based on the filters/state/labels).
I thought of a solution but was not able to implement it.
Basically: make one request to get the headers; the headers should contain a link to the last page. Then make one more request to the last page and count how many issues are on it. Then we can calculate:
count = (number-of-pages - 1) * issues-per-page + issues-on-last-page
But I could not find out how to get response header information from the authenticated Octokit client.
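A sketch of that approach, relying on Octokit exposing the parsed Link header through last_response.rels (user, repository, oauth_token and request_params come from the code above; the per_page value is an arbitrary choice):

require "octokit"
require "uri"

Octokit.auto_paginate = false
github = Octokit::Client.new(access_token: oauth_token)

per_page = 100
first_page = github.list_issues("#{user}/#{repository}", request_params.merge(per_page: per_page))
last_rel = github.last_response.rels[:last]

count =
  if last_rel.nil?
    first_page.count # everything fit on one page
  else
    # Pull the page number out of the last-page URL, then fetch only that page.
    pages = Integer(URI.decode_www_form(URI(last_rel.href).query).to_h["page"])
    (pages - 1) * per_page + last_rel.get.data.count
  end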
If there is a simple way of doing it without octokit, I will happily use it.
Note: I want to fix this because the number of pull requests is quite high, and the code above generates R14 (memory quota exceeded) errors on Heroku.
Thank You!
I feel an easy way is to use the GitHub Search API and restrict the number of PRs displayed per page with the per_page parameter. For example, to count all the PRs of the repo OneGet/oneget you can use https://api.github.com/search/issues?q=repo:OneGet/oneget+type:pr&per_page=1. The JSON response has a "total_count" field that gives the total number of PRs, and the response stays relatively light since it lists only one issue.
Ref: Search Issues
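With octokit, that is a single call; a sketch using search_issues, which wraps the same Search endpoint:

require "octokit"

# Only total_count matters, so request a single item per page.
count = Octokit.search_issues("repo:OneGet/oneget type:pr", per_page: 1).total_count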
We have been reading the Definitive guide to form based website authentication with the intention of preventing rapid-fire login attempts.
One example of this could be:
1 failed attempt = no delay
2 failed attempts = 2 sec delay
3 failed attempts = 4 sec delay
etc
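A minimal sketch of that schedule (a hypothetical helper, not taken from the guide):

# 1 failed attempt -> no delay; n >= 2 -> 2^(n-1) seconds: 2s, 4s, 8s, ...
def login_delay(failed_attempts)
  failed_attempts < 2 ? 0 : 2**(failed_attempts - 1)
end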
Other methods appear in the guide, but they all require a storage capable of recording previous failed attempts.
Blocklisting is discussed in one of the posts in this issue as a possible solution (it appears under the old name "blacklisting", which the documentation has since changed to "blocklisting").
As per Rack::Attack specifically, one naive example of implementation could be:
Where the login fails:
StorageMechanism.increment("bad-login/#{req.ip}")
In the rack-attack.rb:
Rack::Attack.blacklist('bad-logins') { |req|
  StorageMechanism.get("bad-login/#{req.ip}")
}
There are two parts here: returning the response if the request is blocklisted, and checking whether a previous failed attempt happened (StorageMechanism).
The first part, returning the response, can be handled automatically by the gem. However, the second part is less clear to me, at least with Redis, the de facto cache backend in the gem and Rails world.
As far as I know, expired keys in Redis are removed automatically. That would make it impossible to access the information (even if expired), set a new value for the counter, and increase the timeout for the refractory period accordingly.
Is there any way to achieve this with Redis and Rack::Attack?
I was thinking that maybe the 'StorageMechanism' has to remain absolutely agnostic in this case and know nothing about Rack::Attack and its storage choices.
Sorry for the delay in getting back to you; it took me a while to dig out my old code relating to this.
As discussed in the comments above, here is a solution using a blocklist with a findtime:
# config/initializers/rack-attack.rb
class Rack::Attack
  (1..6).each do |level|
    blocklist("allow2ban login scrapers - level #{level}") do |req|
      Allow2Ban.filter(
        req.ip,
        maxretry: (20 * level),
        findtime: (8**level).seconds,
        bantime: (8**level).seconds
      ) do
        req.path == '/users/sign_in' && req.post?
      end
    end
  end
end
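Worked out, the schedule that loop generates escalates like this (a quick check in plain Ruby; the durations are just 8**level seconds):

(1..6).each do |level|
  puts "level #{level}: ban after #{20 * level} sign-in POSTs within #{8**level}s, for #{8**level}s"
end
# level 1: 20 within 8s ... level 6: 120 within 262144s (~3 days)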
You may wish to tweak those numbers as desired for your particular application; the figures above are only what I decided as 'sensible' for my particular application - they do not come from any official standard.
One issue with the above is that when developing/testing the application (e.g. with your RSpec test suite), you can easily hit these limits and inadvertently throttle yourself. This can be avoided by adding the following config to the initializer:
safelist('allow from localhost') do |req|
  '127.0.0.1' == req.ip || '::1' == req.ip
end
The most common brute-force login attack is a brute-force password attack where an attacker simply tries a large number of emails and passwords to see if any credentials match.
You should mitigate this in the application by use of an account LOCK after a few failed login attempts. (For example, if using devise then there is a built-in Lockable module that you can make use of.)
However, this account-locking approach opens a new attack vector: An attacker can spam the system with login attempts, using valid emails and incorrect passwords, to continuously re-lock all accounts!
This configuration helps mitigate that attack vector, by exponentially limiting the number of sign-in attempts from a given IP.
I also added the following "catch-all" request throttle:
throttle('req/ip', limit: 300, period: 5.minutes, &:ip)
This is primarily to throttle malicious/poorly configured scrapers; to prevent them from hogging all of the app server's CPU.
Note: If you're serving assets through rack, those requests may be counted by rack-attack and this throttle may be activated too quickly. If so, add a condition to exclude them from tracking, as in the sketch below.
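For example, one way to skip asset requests (the '/assets' prefix is the Rails default; adjust it to your app):

throttle('req/ip', limit: 300, period: 5.minutes) do |req|
  req.ip unless req.path.start_with?('/assets')
end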
I also wrote an integration test to ensure that my Rack::Attack configuration was doing its job. There were a few challenges in making this test work, so I'll let the code+comments speak for itself:
class Rack::AttackTest < ActionDispatch::IntegrationTest
  setup do
    # Prevent subtle timing issues (==> intermittent test failures)
    # when the HTTP requests span across multiple seconds
    # by FREEZING TIME(!!) for the duration of the test
    travel_to(Time.now)

    @removed_safelist = Rack::Attack.safelists.delete('allow from localhost')

    # Clear the Rack::Attack cache, to prevent test failure when
    # running multiple times in quick succession.
    #
    # First, un-ban localhost, in case it is already banned after a previous test:
    (1..6).each do |level|
      Rack::Attack::Allow2Ban.reset('127.0.0.1', findtime: (8**level).seconds)
    end

    # Then, clear the 300-request rate limiter cache:
    Rack::Attack.cache.delete("#{Time.now.to_i / 5.minutes}:req/ip:127.0.0.1")
  end

  teardown do
    travel_back # Un-freeze time
    Rack::Attack.safelists['allow from localhost'] = @removed_safelist
  end

  test 'should block access on 20th successive /users/sign_in attempt' do
    19.times do |i|
      post user_session_url
      assert_response :success, "was not even allowed to TRY to login on attempt number #{i + 1}"
    end

    # For DOS protection: Don't even let the user TRY to login; they're going way too fast.
    # Rack::Attack returns 403 for blocklists by default, but this can be reconfigured:
    # https://github.com/kickstarter/rack-attack/blob/master/README.md#responses
    post user_session_url
    assert_response :forbidden, 'login access should be blocked upon 20 successive attempts'
  end
end
I've been trying to get this to work for probably 6 hours now to no avail, and I've read every Stack Overflow question I could find on the topic.
I'm trying to get 100, 200, or maybe 500 photos from a single tag:
func hashtags(hashtag: String, nextMaxTagId: String?) -> RequestParamters {
    var params = "/tags/\(hashtag)/media/recent|access_token=\(accessToken)"
    var parameters = Dictionary<String, AnyObject>()
    parameters["access_token"] = accessToken
    let urlString = "https://api.instagram.com/v1/tags/\(hashtag)/media/recent"
    if let nextMaxTagId = nextMaxTagId {
        params += "|max_tag_id=\(nextMaxTagId)"
        parameters["max_tag_id"] = nextMaxTagId
    }
    let sig = HMAC.signWithKey(C.InstagramClientSecret(), usingData: params)
    parameters["sig"] = sig
    return (urlString: urlString, parameters: parameters)
}
This is what I use to construct the URLs and parameters for my requests. My first request does not have a nextMaxTagId; that request goes through and returns 20 images plus a pagination JSON block.
Then, when I extract the next_max_tag_id from the pagination block and create a request using that parameter, I get another 20 images, but they are the same images as before, and this time there is no pagination block.
I am signing my requests correctly (as all my other API requests throughout the app go through no problem) and I am not in Sandbox mode.
Edit: I've also tried using min_tag_id=\(nextMinTagId), still do not receive pagination in the next request.
Seems like:
1) You are using the Instagram Developer API with what seems like an authorized API key, and you mentioned you are NOT in Sandbox, so you're in the Production environment for that API.
I'm trying to get 100, 200, or maybe 500 photos from a single tag
2) Combined with "returns 20 images and a pagination json", this means that for 100 photos you need at least 5 calls (100/20 == 5), for 200 at least 10, and for 500 at least 25.
3) According to the developer documentation's rate limits, the overall cap in Production is 5000 req/hour, with several APIs restricted to a much smaller limit (some are 30/60 req/hour). I'm not sure I see the exact tag rate limit you are hitting, but since the question mentions:
for probably 6 hours now to no avail
it's also possible you've just been hitting the overall hourly request limit each hour.
I definitely know that this is not an answer I enjoy giving, because it essentially says: you're stuck. I've actually played with these rate limits myself before, and I find them extremely limiting (pun fully intended). The only other option, albeit not as "above board", is to scrape Instagram itself for the information you need. I say it's not as "above board" because, if you needed info not found in a web scrape, you could theoretically scrape the mobile API through some minor reverse engineering (i.e. using an HTTP proxy to spoof mobile traffic systematically).
In the end, the API Instagram publishes is definitely very limited, and will face rate limits for the foreseeable future (unless you can get those somehow lifted in a specific partnership they somehow deem worthy, although I'm not sure how this could be approached).
I'm using RSpec to test my application and I'm having a hard time figuring out how to test this. Slack::Notifier's job is to send a POST request to a webhook. Once I call this method in RSpec, I don't know how to see the response. Also, is it possible to match the format of this text against an expected text somewhere? My method is below. Thanks.
def notify
  offset = 14400 # UTC to EST
  notifier = Slack::Notifier.new(
    Rails.application.secrets.slack_organization_name,
    Rails.application.secrets.slack_token,
    channel: "##{Rails.application.secrets.slack_channel}",
    username: Rails.application.secrets.slack_user_name
  )
  notifier.ping(":white_check_mark: *USAGE SUMMARY for #{(Time.now - offset).to_formatted_s(:long)}*")
  count = 0
  current_time = Time.now.to_i
  live_response.each do |r|
    if r["properties"]["time"] > ((current_time - offset) - 60) # && r["properties"]["$initial_referring_domain"] == "capture.com"
      notifier.ping("
*Name:* #{r["properties"]["$name"]}
*Event:* #{r["event"]}
*Keywords:* #{r["properties"]["keywords"]}
*Organization:* #{r["properties"]["organizationName"]}
*Email:* #{r["properties"]["$email"]}
*Time:* #{Time.at(r["properties"]["time"] + offset).utc.to_datetime.in_time_zone("Eastern Time (US & Canada)").to_formatted_s(:long_ordinal)}
*More Data:* #{ANALYTICS_URL}#{r["properties"]["distinct_id"]}
__________________________________________________
")
      count += 1
    end
  end
  notifier.ping("*There were #{count} events in this report.*")
end
Testing network communications (like API calls) is a tricky thing. Personally, I would rely on programming by contract and testing in isolation, i.e. assume the external service is working fine and responds positively to a valid request.
Then you test your client code by checking that you are actually sending a valid request. To do this, stub the method where control exits your code into library/system code. For example, if you are making an HTTP GET request with a gem like HTTParty, stub HTTParty.get (e.g. allow(HTTParty).to receive(:get)) and verify in that stub that the correct parameters were sent.
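Applied to the notifier above, a minimal sketch, assuming the method lives on a hypothetical Report class and rspec-mocks is available: stub Slack::Notifier.new to hand back a double, then assert on the text passed to ping instead of inspecting a webhook response.

# spec/report_notify_spec.rb
require 'rails_helper'

RSpec.describe Report, '#notify' do
  it 'pings Slack with a usage summary' do
    notifier = instance_double(Slack::Notifier, ping: true)
    allow(Slack::Notifier).to receive(:new).and_return(notifier)

    report = Report.new
    # live_response would also need a stub/fixture so no real API is hit:
    allow(report).to receive(:live_response).and_return([])

    report.notify

    expect(notifier).to have_received(:ping)
      .with(a_string_matching(/USAGE SUMMARY/))
      .at_least(:once)
  end
end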
On the other side of the spectrum, you should also simulate both positive and negative responses from the web service and make sure your client code handles them in the expected manner.
If you make a real network call, then you are introducing a lot of dependencies into your test: a test setup of the external service, the risk of network issues (timeouts, breakdowns, etc.), problems with the external service itself, and maybe more.
If you yourself are writing that web service too, then test it in isolation as well, i.e. by simulating valid and invalid inputs and making sure they are handled properly. That part is pretty much your controller specs or request specs.
Once again, this is my opinion. Suggestions to do this in a better way and constructive criticism on the shortcomings of this approach are definitely welcome.