Why does test performance become worse when the worker count increases? - playwright

I have 4 kinds of test cases with the same steps within each type. With 1 worker, the case durations look fine. When I increased the worker count to 2, 4, and 6, the case durations became longer; some cases took twice as long. Here is the average duration table. My question is why performance gets worse as the worker count increases. It should not be limited by CPU cores: I run the cases on a 10-core Windows PC.
[Table "Performance compare": average case duration per worker count, not reproduced here]

It may be that beforeAll is running more often. See: https://github.com/microsoft/playwright/issues/13532#issuecomment-1099626840
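In Playwright, test.beforeAll runs once per worker process for each test file, so any expensive setup there is repeated for every worker, and all of those copies compete for the same machine and the same system under test. A minimal sketch of the pattern, with placeholder setup and URL:

import { test, expect } from '@playwright/test';

// test.beforeAll runs once per worker process (per file). With 6 workers this
// setup can run 6 times in parallel, competing for CPU, disk, and the backend,
// which stretches the measured duration of the individual cases.
test.beforeAll(async () => {
  // placeholder for expensive setup: seeding data, logging in, warming caches
});

test('case of type A', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  await expect(page).toHaveTitle(/Example/);
});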

Related

Finding optimum CPU limit for docker containers in Splunk

I'm using Splunk to monitor my applications.
I also store resource statistics in Splunk.
Goal: I want to find the optimum CPU limit for each container.
How do I write a query that finds an optimum CPU limit? Or, the other question: should I even try?
Concern 1: Say I customize my query and use the MAX(CPU) value. That doesn't mean my container will be running at that level most of the time, so I might set an unnecessarily high limit for my containers.
Let me explain: if MAX(CPU) gives me a limit value of 10, that peak might have happened because of a single bulk operation. My container's expected usage may be around 1.2 the rest of the time, apart from that one operation, so using the MAX value won't work.
Concern 2: Say I use the AVG(CPU) value instead, and it is 2. How many of my operations will then have to wait, and for how long, after this change? How many of them are going to time out? It may create a lot of side effects. How will I decide the real average value? What parameters should be used?
Is it possible to include such conditions in the query? Or do I need an AI to decide it? :)
Here are my given parameters:
path=statistics.cpus_system_time_secs
path=statistics.cpus_user_time_secs
path=statistics.cpus_nr_periods
path=statistics.cpus_nr_throttled
path=statistics.cpus_throttled_time_secs
path=statistics.cpus_limit
I bet you can ask better questions than me. Let's discuss.
"Optimum" is going to depend greatly on your own environment (resources available, application priority, etc)
You probably want to look at a combination of the following factors:
avg(CPU)
max(CPU) (and time spent there)
min(CPU) (and time spent there)
I suspect your "optimum" limit is going to be a % below your max... but only if you're spending 'a lot' of time maxed out
And, of course, being "maxed" may not matter, if other containers are running acceptably
Keep in mind, once you set that limit, your max will drop (as, likely, will your avg)
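As a rough starting point, here is an SPL sketch along those lines. It assumes you have already derived a per-sample CPU usage figure in a field named cpu (for example from the user and system time counters) and that each event carries a container field; the index name and field names are placeholders for whatever your data actually uses.

index=container_stats
| eventstats max(cpu) AS max_cpu BY container
| eval near_max=if(cpu > 0.9 * max_cpu, 1, 0)
| stats avg(cpu) AS avg_cpu max(cpu) AS max_cpu min(cpu) AS min_cpu
        sum(near_max) AS samples_near_max count AS samples BY container
| eval pct_time_near_max=round(100 * samples_near_max / samples, 1)

Here pct_time_near_max is a crude stand-in for "time spent maxed out": if it is small, a limit somewhat below max_cpu is probably safe; if it is large, the container genuinely needs the headroom.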

How to space out influxdb continuous query execution?

I have many InfluxDB continuous queries (CQs) used to downsample data over a period of time, on several occasions. At one point the load became high and InfluxDB ran out of memory while executing the continuous queries.
Say I have 10 CQs and all 10 execute in InfluxDB at the same time. That impacts memory heavily. I am not sure whether there is any way to evenly space out, or add some delay between, the execution of the CQs. My suspicion is that executing all the CQs at the same time makes InfluxDB crash. All the CQs are specified in the InfluxDB config, and I hoped there might be a way to add a time delay between CQs there, but I couldn't find out how. One sample CQ:
CREATE CONTINUOUS QUERY "cq_volume_reads" ON "metrics"
BEGIN
  SELECT sum(reads) AS reads INTO rollup1.tier_volume
  FROM "metrics".raw.tier_volume
  GROUP BY time(10m), *
END
I also don't know whether this is the best way to resolve the problem. Any thoughts on this approach, or suggestions for a better one, would be much appreciated. Suggestions for InfluxDB debugging tools would be great as well. Thanks!
#Rajan - A few comments:
The canonical documentation for CQs is here. Much of what I'm suggesting is from there.
Are you using back-referencing? I see your example CQ uses GROUP BY time(10m),* - the * wildcard is usually used with backreferences. Otherwise, I don't believe you need to include the * to indicate grouping by all tags - it should already be grouped by all tags.
If you are using backreferences, that runs the CQ for each measurement in the metrics database. This is potentially very many CQ executions at the same time, especially if you have many CQ defined this way.
You can set an offset with GROUP BY time(10m, <offset>), but this also shifts the time interval used for your aggregation function (sum in your example): with a 1-minute offset, a point becomes the sum of data from e.g. 13:11->13:21 instead of 13:10->13:20. This staggers execution but may not work for your downsampling use case. From a signal-processing standpoint, a 1-minute offset wouldn't change the validity of the downsampled data, but it might produce unwanted graphical display problems depending on what you are doing. I do suggest trying this option (see the sketch at the end of this answer).
Otherwise, you can try to reduce the number of downsampling CQs to reduce memory pressure, downsample on a larger timescale (e.g. 20m), or, lastly, increase the hardware resources available to InfluxDB.
For managing memory usage, look at this post. There are not many adjustments in 1.8 but there are some.
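For reference, a minimal sketch of the offset approach: two CQs aggregating on the same 10m interval, with the second given a 1-minute offset so the two do not fire at the same moment. The second measurement and field names (tier_writes, writes) are illustrative only.

CREATE CONTINUOUS QUERY "cq_volume_reads" ON "metrics"
BEGIN
  SELECT sum(reads) AS reads INTO rollup1.tier_volume
  FROM "metrics".raw.tier_volume
  GROUP BY time(10m), *
END

CREATE CONTINUOUS QUERY "cq_volume_writes" ON "metrics"
BEGIN
  SELECT sum(writes) AS writes INTO rollup1.tier_writes
  FROM "metrics".raw.tier_writes
  GROUP BY time(10m, 1m), *
END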

Is it possible to raise the sampling frequency of perf stat?

I am using perf for profiling, but the number of monitored PMU events is higher than the number of hardware counters, so the round-robin multiplexing strategy is triggered. However, some of my test cases run for less than a millisecond, which means that if the execution time is shorter than the reciprocal of the default switch frequency (1000 Hz), some events may not be profiled at all.
How can I raise the sampling frequency of perf stat, the way perf record -F <frequency> does, to make sure that every event is recorded even if the measurement overhead increases slightly?
First off, remember that sampling is different than counting.
perf record does a sampling of the events that occur during the profiling period. This means that it will not count all of the events that happened (this can be tweaked, of course!). You can modify the frequency of sample collection to increase the number of samples that get collected; it will usually work out to something like: for every 10 (or whatever number > 0) events that occur, perf record will only record 1 of them.
perf stat will do a counting of all the events that occur. For each event that happens, perf stat will count it and will try not to miss any, unlike sampling. Of course, the number of events counted may not be accurate if there is multiplexing involved (i.e. when the number of events measured is greater than the number of available hardware counters). There is no concept of setting up frequencies in perf stat since all it does is a direct count of all the events that you intend to measure.
You can see this in the Linux kernel's perf tool source: the events set up by perf stat have their sample period (the inverse of the sample frequency) set to 0 - so there is no sampling frequency involved at all.
Anyway, what you can do is run perf stat in verbose mode with perf stat -v to see and understand what is happening with all of the events you are measuring.
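For example (the event list and ./my_testcase are placeholders):

# count a handful of events, with verbose output about how each was set up
perf stat -v -e cycles,instructions,cache-misses,branch-misses -- ./my_testcase

When multiplexing occurs, perf stat also prints a percentage in brackets next to each count, showing the fraction of the run during which that event was actually on a hardware counter (the reported value is scaled up accordingly).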
To understand more about perf stat, you can also read this answer.

neo4j not creating index on large dataset

I am creating an index on a very large (8.2M node, 63M property) neo4j db instance.
CREATE INDEX ON :Article(lowerTitle)
It takes a negligible amount of time to issue the command, and the index (presumably) begins to process.
I have a max Java heap of 100 GB and 40 cores (it's a large server). It is, however, stupidly, an HDD.
Right after issuing the index command, my core usage spikes to heavy utilization. After about 20 seconds, it drops to almost no processor usage but about 90% memory usage.
I have left it running for 3 hours, and the index has still not been created (or at least there is no improvement for simple MATCH queries on a single property, which average about 16 seconds).
MATCH (arti {lowerTitle: "quantum mechanics"}) RETURN arti
Is this reasonable? What is taking so long? Am I doing something wrong?
NOTE: I have also noticed that my total database size (38.02GB) has not increased over the 3 hours
For verifying that your index is online, issue the :schema command in the browser.
You should see your index status.
ONLINE means OK
POPULATING means it is still populating the indices
FAILED means, well, failed
Your query will never run fast, because you are not using a label, so no index will be used. Change it to:
MATCH (arti:Article {lowerTitle: "quantum mechanics"}) RETURN arti
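Once the index shows as ONLINE, you can double-check that it is actually being used by profiling the labeled query (PROFILE is standard Cypher; the plan should show an index seek on :Article(lowerTitle) rather than a scan over all nodes):

PROFILE MATCH (arti:Article {lowerTitle: "quantum mechanics"}) RETURN arti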

sidekiq-pro batches don't appear to release redis memory after batches complete

We are using Sidekiq Pro 1.7.3 and Sidekiq 3.1.4, with Ruby 2.0 and Rails 4.0.5, on Heroku with the RedisGreen addon (1.75 GB of memory).
We run a lot of Sidekiq batch jobs, probably around 2 million jobs a day. What we've noticed is that Redis memory steadily increases over the course of a week. I would have expected that when the queues are empty and no workers are busy, Redis would have low memory usage, but it stays high. I'm forced to run a flushdb roughly every week because we approach our Redis memory limit.
I've had a series of correspondence with RedisGreen and they suggested I reach out to the Sidekiq community. Here are some stats from RedisGreen:
Here's a quick summary of RAM use across your database:
The vast majority of keys in your database are simple values taking up 2 bytes each.
200MB is being consumed by "queue:low", the contents of your low-priority sidekiq queue.
The next largest key is "dead", which occupies about 14MB.
And:
We just ran an analysis of your database - here is a summary of what we found in 23129 keys:
18448 strings with 1048468 bytes (79.76% of keys, avg size 56.83)
6 lists with 41642 items (00.03% of keys, avg size 6940.33)
4660 sets with 3325721 members (20.15% of keys, avg size 713.67)
8 hashs with 58 fields (00.03% of keys, avg size 7.25)
7 zsets with 1459 members (00.03% of keys, avg size 208.43)
It appears that you have quite a lot of memory occupied by sets. For example, each of these sets has more than 10,000 members and occupies nearly 300 KB:
b-3819647d4385b54b-jids
b-3b68a011a2bc55bf-jids
b-5eaa0cd3a4e13d99-jids
b-78604305f73e44ba-jids
b-e823c15161b02bde-jids
These look like Sidekiq Pro "batches". It seems like some of your batches are getting filled up with very large numbers of jobs, which is causing the additional memory usage that we've been seeing.
Let me know if that sounds like it might be the issue.
Don't be afraid to open a Sidekiq issue or email prosupport # sidekiq.org directly.
Sidekiq Pro Batches have a default expiration of 3 days. If you set the Batch's expires_in setting longer, the data will sit in Redis longer. Unlike jobs, batches do not disappear from Redis once they are complete. They need to expire over time. This means you need enough memory in Redis to hold N days of Batches, usually not a problem for most people, but if you have a busy Sidekiq installation and are creating lots of batches, you might notice elevated memory usage.
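For illustration, here is a rough sketch of setting that expiry when a batch is defined, inside a Rails app. The expires_in accessor and its units are assumptions based on the wording above (check the Sidekiq Pro Batch documentation for your version), and RollupWorker is a placeholder job class.

batch = Sidekiq::Batch.new
batch.description = 'nightly rollup'
# Assumed accessor for the expires_in setting described above: how long
# completed batch data lingers in Redis before expiring (default 3 days).
batch.expires_in = 3 * 24 * 60 * 60
batch.jobs do
  RollupWorker.perform_async
end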
