When I run httperf with the following options, the output is easy to understand.
Options: make a total of 10 connections (num-conns) at a rate of 10 connections/second (rate), with 2 request calls per connection (num-calls).
Output: 10 connections with 20 request calls
httperf -v --server www.example.com --wlog=n,$HOME/tmp/reqs.txt_httperf --rate=10 --num-conns=10 --num-calls=2 --hog
Total: connections 10 requests 20 replies 10 test-duration 1.575 s
However, when I use the following options, the httperf output is confusing.
Options: make a total of 4 connections (num-conns) at a rate of 10 connections/second (rate), with 6 request calls per connection (num-calls).
httperf -v --server www.example.com --wlog=n,$HOME/tmp/reqs.txt_httperf --rate=10 --num-conns=4 --num-calls=6 --hog
Total: connections 4 requests 8 replies 4 test-duration 0.455 s
It seems like when num-calls is greater than num-conns, the number of requests made is 2*num-conns.
I am not following how num-calls interacts with num-conns here. Am I missing something?
How num-calls relates to num-conns: on each connection you can make multiple HTTP transactions (a.k.a. "calls"), so num-calls is per connection and the expected total is num-conns x num-calls. For example, with num-conns = 4 and 2 completed calls per connection, you get 8 requests in total. That matches your second run: only 2 of the 6 calls were sent on each connection (and only 1 got a reply), which usually means the server closed each connection after the first response instead of honouring keep-alive.
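A quick way to check is to compare the expected total against the Total line. If the server kept each connection alive for all 6 calls, a run like the one below should report 24 requests rather than 8 (the commented line is what httperf would print in that case, not an actual capture):
httperf --server www.example.com --rate=10 --num-conns=4 --num-calls=6 --hog
# expected if keep-alive is honoured:
# Total: connections 4 requests 24 replies 24 test-duration ...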
Hope this helps.
Whatever steps I take, the first execution of any query I make in Neo4j always takes longer than any subsequent execution of the same query. So I guess something other than the store is being cached.
I'm using the latest community container image for 3.5 (3.5.20 at the time of writing)
I have plenty of memory to cache absolutely everything if I want to
I'm using well documented warm-up strategies in order to (allegedly) prime the page cache
The database details...
I run CALL apoc.monitor.store(); and it tells me the size of each store: -
+------------------------------------------------------------------------------------------------------------+
| logSize | stringStoreSize | arrayStoreSize | relStoreSize | propStoreSize | totalStoreSize | nodeStoreSize |
+------------------------------------------------------------------------------------------------------------+
| 1224 | 148165607424 | 3016515584 | 26391839040 | 42701007672 | 241318778285 | 2430128610 |
+------------------------------------------------------------------------------------------------------------+
I run CALL apoc.warmup.run(true, true, true); (before running any queries). It takes about 15 minutes and displays a summary of what it's done. The text it outputs is not easily parsed in its raw form so I've summarised salient parts of it below. Basically it tells me the number of pages loaded for each store, and these are: -
nodePages 296,719
relPages 3,234,294
relGroupPages 4,580
propPages 5,233,608
stringPropPages 18,086,620
arrayPropPages 368,225
indexPages 2,235,227
---
Total 29,459,273
With a page size of 8,192 bytes, that's approximately 225 GiB of pages for the displayed stores.
I have enough physical memory and I have already set NEO4J_dbms_memory_pagecache_size to 250G
I set NEO4J_dbms_memory_heap_initial__size and NEO4J_dbms_memory_heap_max__size to 8G
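For reference, a minimal sketch of how those settings are passed to the container (the image tag matches the version above; the host data path is just an illustration):
docker run -d \
  -v /data/neo4j:/data \
  -e NEO4J_dbms_memory_pagecache_size=250G \
  -e NEO4J_dbms_memory_heap_initial__size=8G \
  -e NEO4J_dbms_memory_heap_max__size=8G \
  neo4j:3.5.20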
So (allegedly) the page cache is "warm" and I have enough physical memory.
Query timings...
I run my query, which returns 1,813 records, and I execute the same query several times in order to illustrate the issue. I see the following (typical) timings: -
1. 1,821 ms
2. 75 ms
3. 60 ms
4. 51 ms
5. 48 ms
6. 42 ms
7. 38 ms
8. 36 ms
9. 36 ms
The actual values are dependent on the query but the first execution of every query is always significantly longer than the second.
ADDENDUM (16/Jul).
Just to be clear, using apoc.warmup.run does help.
If I don't use it, the first query is much longer still.
Having just restarted the DB (without a warm-up), the first query
took 7,829 ms. The second was 116 ms, the third 66 ms.
So, warm-up or not, the first query is always longer.
Question...
What's going on?
Can I do anything more to reduce the initial query time?
Oh, and using the query itself as the warm-up is not the answer - I don't know in advance what queries will be used.
Not sure why apoc.warmup.run does not speed up your initial query, but you could just try using an initial query invocation as your "warmup" instead.
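For example, a generic touch-everything pass (a common hand-rolled warm-up; prop here is a placeholder, and touching real property keys is what pulls the property stores into the page cache) could be run once at startup:
MATCH (n)
OPTIONAL MATCH (n)-[r]->()
RETURN count(n.prop) + count(r.prop);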
I am running a few microservice instances that are functioning as edge routers and have the @EnableZuulProxy annotation. I have written a number of filters and these control the flow of requests into the system.
What I would like to do is get the circuit stats from what is going on under the covers. I see that there is an underlying Netflix class, DynamicServerListLoadBalancer, that has some of the stats I would like to see. Is it possible to get an instance of it and, at a specific time, get the stats from it?
I can see it has stuff like this (I reformatted a log statement from my logs):
c.n.l.DynamicServerListLoadBalancer : DynamicServerListLoadBalancer for client authserver initialized:
DynamicServerListLoadBalancer:{
NFLoadBalancer:
name=authserver,current
list of Servers=[127.0.0.1:9999],
Load balancer stats=
Zone stats: {
defaultzone=[
Zone:defaultzone;
Instance count:1;
Active connections count: 0;
Circuit breaker tripped count: 0;
Active connections per server: 0.0;]
},
Server stats:
[[
Server:127.0.0.1:9999;
Zone:defaultZone;
Total Requests:0;
Successive connection failure:0;
Total blackout seconds:0;
Last connection made:Wed Dec 31 19:00:00 EST 1969;
First connection made: Wed Dec 31 19:00:00 EST 1969;
Active Connections:0;
total failure count in last (1000) msecs:0;
average resp time:0.0;
90 percentile resp time:0.0;
95 percentile resp time:0.0;
min resp time:0.0;
max resp time:0.0;
stddev resp time:0.0
]]
}
ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList@5b1b78aa
All of this would be valuable to get and act on. Mostly the acting would be to feed usage heuristics back to the system.
Ok, like most of these things, I figured it out myself.
So here you go.
import com.netflix.hystrix.HystrixCommandKey;
import com.netflix.hystrix.HystrixCommandMetrics;
import com.netflix.hystrix.HystrixCommandProperties;

// Look up the metrics for the Hystrix command you are interested in.
HystrixCommandKey hystrixCommandKey = HystrixCommandKey.Factory.asKey("what you are looking for");
// Note: getInstance() returns null if the command has never been executed.
HystrixCommandMetrics hystrixCommandMetrics = HystrixCommandMetrics.getInstance(hystrixCommandKey);
HystrixCommandProperties properties = hystrixCommandMetrics.getProperties();
long maxConnections = properties.executionIsolationSemaphoreMaxConcurrentRequests().get().longValue();
boolean circuitOpen = properties.circuitBreakerForceOpen().get().booleanValue();
int currentConnections = hystrixCommandMetrics.getCurrentConcurrentExecutionCount();
So in this example, "what you are looking for" is the Hystrix command that you are looking for.
This gets you the properties of the particular Hystrix command in question.
From this you pull out the max connections, the current connections, and whether the circuit was open.
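One caveat: circuitBreakerForceOpen() reflects the configured force-open flag, not the live state of the circuit. If you want the latter, a sketch like this (using the same key as above) should work:
import com.netflix.hystrix.HystrixCircuitBreaker;

HystrixCircuitBreaker circuitBreaker = HystrixCircuitBreaker.Factory.getInstance(hystrixCommandKey);
// null until the command has been executed at least once
boolean actuallyOpen = circuitBreaker != null && circuitBreaker.isOpen();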
So there you are.
tracert returns "Request timed out". What I understand from this is that the packets were lost somewhere on the network.
Does it mean the issue is with the ISP or with the hosting provider or my windows system?
10 * * * Request timed out.
11 * * * Request timed out.
12 * * * Request timed out.
13 * * * Request timed out.
14 * * * Request timed out.
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 * * * Request timed out.
19 * * * Request timed out.
20 * * * Request timed out.
21 * * * Request timed out.
22 * * * Request timed out.
23 * * * Request timed out.
24 * * * Request timed out.
25 * * * Request timed out.
26 * * * Request timed out.
27 * * * Request timed out.
28 * * * Request timed out.
29 * * * Request timed out.
30 * * * Request timed out.
The first 9 were successful.
I can't see the first 9 hops, but if they are all the same then you may have a firewall configuration issue that prevents the packets from either getting out or getting back.
Try again with your firewall turned off (temporarily!). The other option is that your ISP may drop ICMP traffic as a matter of course, or only when they are busy with other traffic.
ICMP (the protocol used by traceroute) is of the lowest priority, and when higher priority traffic is ongoing the router may be configured to simply drop ICMP packets. There is also the possibility that the ISP drops all ICMP packets as a matter of security since many DOS (Denial of Service) attacks are based on probing done with ICMP packets.
Some routers view all pings as a port scan and block them for that reason (as the first step in any attack is determining which ports are open). However, blocking ping/tracert packets is only partially effective at mitigating a denial-of-service attack, as such an attack could use any protocol it wanted (such as TCP or UDP), so long as there is an open port on the targeted machine to receive the packet. For example, to target an HTTP server we need only use an intercepting proxy to repeatedly send a null TCP packet to the server on port 80 or 8080, since these are the two most common ports for HTTP. Likewise, if the target machine is running an IRCd, the port is most likely 6667 (unless the server is using SSL), which is the most common port for that kind of service. Therefore, dropping ping packets does not prevent a DDoS attack - it just makes that type of attack a bit more difficult.
This is what I found in the Wireshark documentation (I had the same problem):
"The tracert program provided with Windows does not allow one
to change the size of the ICMP message sent by tracert. So it won’t be
possible to use a Windows machine to generate ICMP messages that are large
enough to force IP fragmentation. However, you can use tracert to generate
small, fixed-length packets"
https://danielgraham.files.wordpress.com/2021/09/wireshark_ip_v8.1-2.pdf
Use tracert -h 1.
This limits the trace to a single hop (-h sets the maximum number of hops), so tracert doesn't sit there retrying routers beyond the first one. I had written a batch script a while back to scan my entire network to get a list of IPs and computer names, and it would waste time on the firewall that wouldn't answer and on IP addresses that weren't assigned to any computers. Wicked annoying! So I added the -h 1 to the script. It runs through and makes a list in a text file. I hope to improve it in the future by running arp -a first to get a quick list of IPs, then feeding that list into a script similar to this one; that way it doesn't waste time on unassigned IPs.
@echo off
rem Scan 222.222.222.100 through 222.222.222.254, one hop max per trace
set trace=tracert -h 1
set /a byte1=222
set /a byte2=222
set /a byte3=222
set /a byte4=100
:loop
%trace% %byte1%.%byte2%.%byte3%.%byte4%>>ips.txt
set /a byte4=%byte4% + 1
echo %byte4%
if %byte4%==255 goto next
goto loop
:next
Your antivirus may be blocking the incoming packets, and often this behaviour cannot be turned off, because blocking suspicious packets to protect the computer from probes and DoS (Denial of Service) attacks is a basic function of an antivirus.
I am interested in benchmarking different parts of my program for speed. I have tried using info(statistics) and erlang:now().
I need to know the average speed down to the microsecond. I don't know why I am having trouble with a script I wrote.
It should be able to start anywhere and end anywhere. I ran into a problem when I tried starting it on a process that may be running up to four times in parallel.
Is there anyone who already has a solution to this issue?
EDIT:
Willing to give a bounty if someone can provide a script to do it. It needs to work across multiple spawned processes. I cannot accept a function like timer:tc, at least in the implementations I have seen; it only traverses one process, and even then some major editing is necessary for a full test of a full program. Hope I made it clear enough.
Here's how to use eprof, likely the easiest solution for you:
First you need to start it, like most applications out there:
23> eprof:start().
{ok,<0.95.0>}
Eprof supports two profiling modes. You can call it and ask it to profile a certain function, but we can't use that because other processes will mess everything up. We need to manually start profiling and tell it when to stop (this is why you won't have an easy script, by the way).
24> eprof:start_profiling([self()]).
profiling
This tells eprof to profile everything that will be run and spawned from the shell. New processes will be included here. I will run some arbitrary multiprocessing function I have, which spawns about 4 processes communicating with each other for a few seconds:
25> trade_calls:main_ab().
Spawned Carl: <0.99.0>
Spawned Jim: <0.101.0>
<0.100.0>
Jim: asking user <0.99.0> for a trade
Carl: <0.101.0> asked for a trade negotiation
Carl: accepting negotiation
Jim: starting negotiation
... <snip> ...
We can now tell eprof to stop profiling once the function is done running.
26> eprof:stop_profiling().
profiling_stopped
And we want the logs. Eprof will print them to screen by default. You can ask it to also log to a file with eprof:log(File). Then you can tell it to analyze the results. We tell it to collapse the run time from all processes into a single table with the option total (see the manual for more options):
27> eprof:analyze(total).
FUNCTION CALLS % TIME [uS / CALLS]
-------- ----- --- ---- [----------]
io:o_request/3 46 0.00 0 [ 0.00]
io:columns/0 2 0.00 0 [ 0.00]
io:columns/1 2 0.00 0 [ 0.00]
io:format/1 4 0.00 0 [ 0.00]
io:format/2 46 0.00 0 [ 0.00]
io:request/2 48 0.00 0 [ 0.00]
...
erlang:atom_to_list/1 5 0.00 0 [ 0.00]
io:format/3 46 16.67 1000 [ 21.74]
erl_eval:bindings/1 4 16.67 1000 [ 250.00]
dict:store_bkt_val/3 400 16.67 1000 [ 2.50]
dict:store/3 114 50.00 3000 [ 26.32]
And you can see that most of the time (50%) is spent in dict:store/3. 16.67% is taken in outputting the result, and another 16.67% is taken by erl_eval (this is the overhead you get by running short functions in the shell -- parsing them takes longer than running them).
You can then start going from there. That's the basics of profiling run times with Erlang. Handle it with care: eprof can be quite a load on a production system, or for functions that run for too long.
You can use eprof or fprof.
The normal way to do this is with timer:tc. Here is a good explanation.
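For instance, a minimal sketch (timing lists:seq/2 purely as an example):
%% timer:tc(Module, Function, Args) runs the call once and
%% returns {MicroSeconds, Result}.
{Time, _Result} = timer:tc(lists, seq, [1, 1000000]),
io:format("took ~p microseconds~n", [Time]).
Average over many runs if you need stable numbers; a single call can be skewed by scheduling.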
I can recommend you this tool: https://github.com/virtan/eep
You will get something like this https://raw.github.com/virtan/eep/master/doc/sshot1.png as a result.
Step-by-step instructions for profiling all processes on a running system:
On target system:
1> eep:start_file_tracing("file_name"), timer:sleep(20000), eep:stop_tracing().
$ scp -C $PWD/file_name.trace desktop:
On desktop:
1> eep:convert_tracing("file_name").
$ kcachegrind callgrind.out.file_name
I have an Apache + HAProxy + Mongrel cluster setup. I want to receive alerts whenever my Mongrel queue length gets too high.
How do I get the current Mongrel queue length and make it available to alerting tools such as Monit and Nagios?
I know that HAProxy has the information about the Mongrel queue, as it intelligently sends requests to the least busy Mongrel in the cluster. I wonder how it finds out? I need a similar mechanism to generate alerts and/or restart Mongrels when such a condition arises.
Add this to your haproxy config:
stats uri /haproxy/hastats
Then use lynx to get the stats like this:
(assuming haproxy runs on port 10000 - adjust to suit)
lynx --dump http://my-server:10000/haproxy/hastats
There will be a line for each of your server entries in the haproxy config file, telling you whether it's up or down and how long its queue is, like this:
Server Queue Sessions Errors
Name Weight Status Act. Bck. Curr. Max. Curr. Max. Limit Cumul. Conn. Resp. Sec. Check Down
primary 1 UP Y - 0 0 68 386 - 134385861 207 699 0 7028 150
secondary 1 UP Y - 0 0 71 248 - 134464984 216 551 0 7129 98
Now all you need is a script to get the current queue (column 6) and feed it into nagios, and you're away!
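For example, a rough sketch (the host, port, and server name "primary" are taken from the examples above; adjust to suit):
lynx --dump http://my-server:10000/haproxy/hastats | awk '/^ *primary / {print $6}'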
New Relic's RPM product (www.newrelic.com) maintains information on Mongrel queue length. They have an API that you may be able to use to get near real-time feedback on queue length and adjust load balancing accordingly.
You can get more information on the API at: https://newrelic.tenderapp.com/faqs/docs/data-api
Hopefully that provides some help.