GetWorkItemsAsync fails when it retrieves 1800 work items - TFS

GetWorkItemsAsync fails when it retrieves 1800 work items. Example:
int[] ids = (from WorkItem info in wlinks select info.Id).ToArray();
WorkItemTrackingHttpClient tfvcClient = _tfs.GetClient<WorkItemTrackingHttpClient>();
List<Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models.WorkItem> dworkitems = tfvcClient.GetWorkItemsAsync(ids).Result;
If I pass an array of ids with 90 elements, it works fine.
Is there a limit on how many elements it can get, and how can we overcome this problem?

Yes, there is a limit on the URL length; you get this exception once the URL length is exceeded.
As a workaround, limit each call to an allowed range of ids (e.g. 200 at a time) and make several calls for the query.
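A hedged sketch of that batching, shown against the underlying REST endpoint (the organization URL, personal access token, API version, and the chunk size of 200 are all assumptions; the same chunking applies to GetWorkItemsAsync in the .NET client):

import requests

ORG_URL = "https://dev.azure.com/your-org"  # assumption: your collection/organization URL
PAT = "your-pat"                            # assumption: a personal access token

def get_work_items(ids, chunk_size=200):
    # Fetch work items in chunks small enough to keep the URL under the length limit.
    items = []
    for i in range(0, len(ids), chunk_size):
        chunk = ids[i:i + chunk_size]
        resp = requests.get(
            f"{ORG_URL}/_apis/wit/workitems",
            params={"ids": ",".join(map(str, chunk)), "api-version": "5.1"},
            auth=("", PAT),
        )
        resp.raise_for_status()
        items.extend(resp.json()["value"])
    return items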
Unfortunately you’ve hit a limitation of the URL length. Once the URL length has been exceeded, the server just gets the truncated version, so odds are high that the truncated work item id is not valid. I recommend limiting your calls to 200 ids at a time.
Source here: https://github.com/Microsoft/vsts-dotnet-samples/issues/49
Reference this thread for the limitation of the URL length: What is the maximum length of a URL in different browsers?
See also this similar thread: Is there any restriction for number of characters in TFS REST API?

Related

Getting Locust to send a predefined distribution of requests per second

I previously asked this question about using Locust as the means of delivering a static, repeatable request load to the target server (n requests per second for five minutes, where n is predetermined for each second), and it was determined that it's not readily achievable.
So, I took a step back and reformulated the problem into something that you probably could do using a custom load shape, but I'm not sure how – hence this question.
As in the previous question, we have a 5-minute period of extracted Apache logs, where each second, anywhere from 1 to 36 GET requests were made to an Apache server. From those logs, I can get a distribution of how many times a certain requests-per-second rate appeared; e.g. there's a 1/4000 chance of 36 requests being processed on any given second, 1/50 for 18 requests to be processed on any given second, etc.
I can model the distribution of request rates as a simple Python list: each number between 1 and 36 appears in it as many times as that requests-per-second rate occurred in the 5-minute period captured in the Apache logs. Then I just pick a random element from it in the tick() method of a custom load shape to get a number that informs the (user count, spawn rate) calculation.
Additionally, by using a predetermined random seed, I can make the test runs repeatable to within an acceptable level of variation to be useful in testing my API server configuration changes, since the same random list elements should be retrieved each time.
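A sketch of what I mean (the rate counts here are invented; in reality they come from the Apache logs):

import random

random.seed(42)  # fixed seed so each test run draws the same sequence
# rate_counts[r] = number of seconds in the log window that saw r requests/second (invented)
rate_counts = {1: 120, 2: 90, 18: 6, 36: 1}
distribution = [rate for rate, count in rate_counts.items() for _ in range(count)]
requests_this_second = random.choice(distribution)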
The problem is that I'm not yet able to "think in Locust", to think in terms of user counts and spawn rates instead of rates of requests received by the server.
The question becomes this:
How do you implement the tick() method of a custom load shape in such a way that the (user count, spawn rate) tuple results in a roughly known distribution of requests per second to be sent, possibly with the help of other configuration options and plugins?
You need to create a Locust User with the tasks you want it to run (e.g. make your HTTP calls). You can define the time between tasks to roughly control the requests per second. If you have a task that makes a single HTTP call and define wait_time = constant(1), you get roughly 1 request per second per user. Locust's spawn_rate is a per-second unit. Since you already have the data you want to reproduce, and it's in 1-second intervals, you can then create a LoadTestShape class with a tick() method somewhat like this:
from locust import LoadTestShape

class MyShape(LoadTestShape):
    repro_data = […]  # the per-second request rates extracted from the logs
    last_user_count = 0

    def tick(self):
        if len(self.repro_data) > 0:
            requests_per_second = self.repro_data.pop(0)
            # The spawn rate must cover the whole jump (up or down) within one second.
            requests_per_second_diff = abs(self.last_user_count - requests_per_second)
            self.last_user_count = requests_per_second
            return (requests_per_second, requests_per_second_diff)
        return None
If your first data point is 10 requests, you'd need requests_per_second=10 and requests_per_second_diff=10 to make Locust spin up all 10 users in a single second. If the next second is 25, you'd have requests_per_second=25 and requests_per_second_diff=15. In a Load Shape, spawn_rate also works for decreasing the number of users. So if next is 16, requests_per_second=16 and requests_per_second_diff=9.
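For completeness, a minimal sketch of a matching User class, assuming each task makes a single HTTP call with a constant 1-second wait (the host and endpoint are placeholders):

from locust import HttpUser, constant, task

class SingleRequestUser(HttpUser):
    host = "http://localhost:8080"  # assumption: the server under test
    wait_time = constant(1)         # each user runs roughly one task per second

    @task
    def fetch(self):
        self.client.get("/")        # assumption: the endpoint to hit

With a user like this, the running user count approximates the requests per second, which is exactly the first element of the tuple that tick() returns above.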

Length limit for parameter in TFS API?

I am trying to get the last builds of some build definitions in my TFS (Team Foundation Server) with:
project/_apis/build/builds?definitions=1000,1001&queryOrder=queueTimeDescending&minTime=2020-05-03T00:00:00
This works until the "definitions" string reaches a limit of 440 definitions or 1984 characters. Beyond that, I get a 404 error on the request.
Is there an (undocumented) limit on the number of definitions or on the length of the parameter string?
It is not clearly stated how long the URL can be, but it looks like you have reached the limit. According to What is a safe maximum length a segment in a URL path should be?, it is good practice not to exceed 2000 characters, and since you counted almost 2K, this may well be your case.
You can also check this topic on the developer community. There was a discussion about this, with the conclusion:
At present, you can only reduce the length of the URL.
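Until that changes, a workaround in the spirit of the answers above is to split the definitions parameter into batches and merge the results. A sketch (the collection URL, personal access token, and batch size are assumptions):

import requests

BASE_URL = "https://tfs.example.com/collection/project/_apis/build/builds"  # assumption
PAT = "your-pat"  # assumption: a personal access token

def latest_builds(definition_ids, batch_size=100):
    # Keep each request's query string well under the ~2000-character limit.
    builds = []
    for i in range(0, len(definition_ids), batch_size):
        batch = definition_ids[i:i + batch_size]
        resp = requests.get(BASE_URL, params={
            "definitions": ",".join(map(str, batch)),
            "queryOrder": "queueTimeDescending",
            "minTime": "2020-05-03T00:00:00",
        }, auth=("", PAT))
        resp.raise_for_status()
        builds.extend(resp.json()["value"])
    return builds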

Hazelcast Client Memory Leak

We have a Spring Boot 2.0.4 application that uses a distributed Hazelcast 3.11 cache. In the application we configured a HazelcastClient that connects to a Hazelcast server running in a Docker container.
In the cache we store individual "persons" in one map, and the same "persons" as a list in another (~900 persons in one list under one key; the persons in the two maps are not 100% identical: both describe the same real-life person, but the ones in the list have fewer properties). All the maps are of BINARY type.
When we ran stress tests getting a person by random id from the cache (1st map), everything went excellently: 5000 concurrent requests didn't influence our application heap at all, and 10000 only slightly. In JSON format one person's details are about 10kB.
When we ran stress tests getting the list of persons from the cache (2nd map), we hit problems with the heap of the application where the client is configured. With just 500 concurrent requests the heap grew to 4Gb! In JSON format the list is about 800kB; it is stored in the 2nd map and was requested by the same key 500 times.
Does anybody know what is going on?
[Attachments omitted from the original post: the DTO, the Controller, the facade method called from the Controller where caching happens via the @Cacheable annotation, the HazelcastInstance configuration, the hazelcast.xml configuration for the server side, the 500-concurrent-request runs (3 times in a row), and the heap/classes telemetry.]
UPDATE: I made 500 concurrent requests sequentially 23 times. Below we can see the final minutes of the test.
[Screenshot omitted: Telemetries Overview.]
@Nicolay, correct me if I'm wrong:
The second map contains lists of people, ~900 people, as an entry. You mentioned each person is ~10KB, so each entry in the second map is ~9MB, even though you're saying it's 800KB in JSON format. Can you please check the size of the entries in the second map through Hazelcast, e.g. client.getMap(map_name).getEntryView(key).getCost()? This will give you the entry's memory cost in bytes.
500 concurrent requests, if each entry is ~9MB, will require 4.5GB of additional heap, which matches what you observed.
Looking at the numbers, everything seems fine, other than the JSON size being 800KB.
Can you check those numbers?
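The suggested check, sketched with the Python client for illustration (the cluster address, map name, and key are assumptions; in the Java client the equivalent is client.getMap(name).getEntryView(key).getCost()):

import hazelcast

client = hazelcast.HazelcastClient()                     # assumption: default cluster address
persons_map = client.get_map("personsList").blocking()   # assumption: the 2nd map's name

view = persons_map.get_entry_view("some-key")            # assumption: a key under test
print("entry cost in bytes:", view.cost)                 # in-memory cost of that entry

client.shutdown()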

Maximum number of custom events in Flurry analytics?

What is the maximum number of custom events you can report per session with Flurry analytics?
The number of events you can report per session in Flurry is 1000. I asked this question to Flurry support, as I couldn't find it elsewhere (and none of the answers here really answered the question). They answered and also sent me a short document titled "Flurry Methodology and Best Practices" that contained, among other things, this summary:
300 unique events per app
1000 events max per session
100 unique event names per session
10 params max per event
Key/Value for params must be Strings
Key max length 255 chars
Value max length 255 chars
As the definition of "session" is important, I quote, from the same document:
Flurry analytics is based on a session model that only “phones home” at the launch and backgrounding of a session. This prevents “talkiness” from the SDK, conserves battery by not always running the radios, and allows data to travel in a coherent package.
(...)
One addition to the Flurry session model is the concept that a user may bounce out of an app for a very brief time and reenter the app and still be within the original session. A time may be set, in millis, that is colloquially referred to as the “session timeout”. It is configurable at app launch (see setContinueSessionMillis for more details) within a range of 5 seconds to 1 minute, with a default of 10 seconds. If, when the user comes back into the app, the “session timeout” has not been exceeded, then the SDK will treat the “new” session as a continuation of the former session.
Upon this new launch, if there are any sessions not sent, they will be sent. At that time, as well, the SDK will make a decision on whether or not to continue a session or start a new one.
The document is here. Flurry support sent it to me in late February, 2015.
The limit appears to be 300 different event ids, and therefore 300 custom events. Quoting http://support.flurry.com/index.php?title=Analytics/GettingStarted/TechnicalQuickStart:
Your application is currently limited to counting occurrences for 300 different Event ids (maximum length 255 characters).
Additional details from here:
Yes, there is a limit of 300 Events for each application. Each event can have up to 10 parameters, and each parameter can have any number of values.
I believe it is infinite:
Each Event can have up to 10 parameters, and each parameter can have an infinite number of values associated with it. For example, for the ‘Author’ parameter, there may be 1,000 possible authors who wrote an article. We can keep track of each author via this single parameter.
So if you can have an infinite number of values, you could have 10 million authors. Since they are all just values, each one can be tracked (via the parameter). If they "can keep track of each author via this single parameter", then I don't think your event count would be affected. This assumes you set up your event types properly, as in their example:
NSDictionary *articleParams =
  [NSDictionary dictionaryWithObjectsAndKeys:
   @"John Q", @"Author",          // Capture author info
   @"Registered", @"User_Status", // Capture user status
   nil];
[Flurry logEvent:@"Article_Read" withParameters:articleParams];
One event with a maximum of 10 dictionary items, with an infinite number of possible values... I think it would be safe to say you aren't limited here.
There is a limit of 300 Events for each app. Each event can have up to 10 parameters, and each parameter can have any number of values. Please check all details here

Amazon SimpleDB: Response messages don't agree with the request parameters

I'm making a simple high scores database for an iPhone game using Amazon's SimpleDB and am running into some strange issues where SimpleDB's response messages don't seem to line up with the requests I'm sending or even the state of the data on the server.
The expected sequence of events for submitting high scores in the app is:
1. A PutAttributes request is created that tries to overwrite the current score with the new value, but only if it is greater than the last known value of the score.
2. If the expected value doesn't match the value on the server, SimpleDB's response message lets the app know what the actual value is, and a new request is created using it as the new expected value.
3. This process continues until either the response states that everything was OK, or the score on the server comes back as higher than the score we're trying to submit (i.e. if somebody with a higher score submitted while this back and forth was going on).
(In case it's relevant I'm using the ASIHTTPRequest class to handle the requests and I've explicitly turned off caching by setting each request's cache policy to ASIIgnoreCachePolicy when I create them.)
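For illustration, the intended compare-and-retry loop might look like this in Python with boto 2's SimpleDB support, refreshing the expected value with a consistent read after each failed conditional write (the domain name, zero-padded score format, and credentials are assumptions; the actual app does this over REST via ASIHTTPRequest):

import boto

sdb = boto.connect_sdb()               # assumption: AWS credentials in the environment
domain = sdb.get_domain("HighScores")  # assumption: the domain name

def submit_score(item_name, new_score):
    expected = "000"  # assumption: scores are stored as zero-padded strings
    while True:
        try:
            domain.put_attributes(
                item_name,
                {"Score": "%03d" % new_score},
                expected_value=["Score", expected],
            )
            return True   # the conditional write was accepted
        except boto.exception.SDBResponseError:
            # Conditional check failed: re-read the score with a consistent read.
            attrs = domain.get_attributes(item_name, consistent_read=True)
            actual = attrs.get("Score", "000")
            if int(actual) >= new_score:
                return False  # somebody already submitted a higher score
            expected = actual  # retry with the fresh expected value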
However, what's actually happening is a bit strange...
The first response comes back with the expected result. For example, the app submits a score of 200 and expects the score on the server to be 0 but it's actually 100. SimpleDB responds that the conditional check failed and lets the app know the actual value on the server (100).
The app sends a request with an updated expected value, but SimpleDB responds with a response identical to the first one, even though the expected value was changed (e.g. the response says the actual value is 100 and the expected value we passed in was 0, even though we had just changed it to 100).
The app sends a third request with the exact same score/expected values as the second request (e.g. 100 for both) and SimpleDB reports that the condition failed again because the actual value is 200.
So it looks like the second attempt actually worked even though SimpleDB reported a failure and gave an incorrect account of the parameters I had passed in. This odd behavior is also very consistent - every time I try to update a score with an expected value that doesn't match the one on the server the exact same sequence occurs.
I've been scratching my head at this for a while now and I'm flat out of ideas so if anyone with more SimpleDB experience than me could shed some light on this I'd be mighty grateful.
Below is a sample sequence of requests and responses in case that does a better job of describing the situation than my tortured explanation above (these values taken from actual requests and responses but I've edited out the non-relevant parts of the requests).
Request 1
(The score on the server is 100 at this point)
Attribute.1.Name=Score
Attribute.1.Replace=true
Attribute.1.Value=200
Expected.1.Name=Score
Expected.1.Value=000
Consistent=true
Response 1
Conditional check failed. Attribute (Score) value is (100) but was expected (000)
Request 2
(The app updates to proper score but based on the response SimpleDB seems to ignore the changes)
Attribute.1.Name=Score
Attribute.1.Replace=true
Attribute.1.Value=200
Expected.1.Name=Score
Expected.1.Value=100
Consistent=true
Response 2
Conditional check failed. Attribute (Score) value is (100) but was expected (000)
Request 3
(This time SimpleDB gets the expected value right but also reports that the score has been updated even though all previous responses indicated otherwise)
Attribute.1.Name=Score
Attribute.1.Replace=true
Attribute.1.Value=200
Expected.1.Name=Score
Expected.1.Value=100
Consistent=true
Response 3
Conditional check failed. Attribute (Score) value is (200) but was expected (100)
Update (10/21/10)
I checked to make sure that the requestIDs that are being returned from the server are all unique and indeed they are.
Try passing ConsistentRead=true in your requests.
