How do I measure the execution time of a command/script in xonsh? In bash I can do it with time, e.g.:
> time sleep 1
real 0m1,002s
user 0m0,001s
sys 0m0,001s
How can I do it in xonsh?
There's no time shell builtin in xonsh. I use time(1), which comes preinstalled on most Linux distros; if yours doesn't have it, it should be available in your package manager.
🐚 time sleep 1
0.00user 0.00system 0:01.00elapsed 0%CPU (0avgtext+0avgdata 2048maxresident)k
0inputs+0outputs (0major+100minor)pagefaults 0swaps
Or, if you want output that more closely matches the bash builtin:
🐚 time -p sleep 1
real 1.00
user 0.00
sys 0.00
Or you can roll your own (for wall-clock time):
🐚 import time
🐚 from contextlib import contextmanager
🐚 @contextmanager
... def _time():
...     tic = time.time()
...     yield
...     print(time.time() - tic)
...
🐚 with _time():
...     sleep 1
...
1.013087511062622
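If you go the roll-your-own route, time.perf_counter() is usually a better clock than time.time() for measuring durations. Below is a minimal sketch you could drop into ~/.xonshrc; the stopwatch name and the placement are just my choices, not anything xonsh ships. Use it the same way as above, i.e. with stopwatch(): followed by the command.

import time
from contextlib import contextmanager

@contextmanager
def stopwatch(label="elapsed"):
    # perf_counter is monotonic and has higher resolution than time.time()
    tic = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - tic:.3f}s")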
I recently set up healthchecks in my docker-compose config.
It is doing great and I like it. Here's a typical example:
services:
  app:
    healthcheck:
      test: curl -sS http://127.0.0.1:4000 || exit 1
      interval: 5s
      timeout: 3s
      retries: 3
      start_period: 30s
My container is quite slow to boot, hence the 30-second start_period.
But this doesn't really match my expectation: I don't need a check every 5 seconds, but I do need to know, as soon as possible, when the container is ready for the first time, for my orchestration. And since my start_period is approximate, if the container is not ready yet at the first check, I have to wait a full interval before the next retry.
What I'd like to have is:
While container is not healthy, retry every 5 seconds
Once it is healthy, check every 1 minute
Is there a way to achieve this out of the box with docker-compose?
I could write a custom script to achieve this, but I'd rather have a native solution if it is possible.
Unfortunately, this is not possible out of the box.
All the durations you set are final; they can't change depending on the container's state.
However, according to the documentation, the probe does not seem to wait for start_period to finish before running your test. The only effect is that any failure happening during start_period will not be counted as an error.
Below is the sentence that makes me think that:
start_period provides initialization time for containers that need time to bootstrap. Probe failure during that period will not be counted towards the maximum number of retries. However, if a health check succeeds during the start period, the container is considered started and all consecutive failures will be counted towards the maximum number of retries.
I encourage you to test whether this is really the case, as I've never paid much attention to whether the healthcheck runs during the start period.
If it does, you can probably increase your start_period if you're unsure about the boot duration, and also increase the interval, in order to find a good compromise.
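If what you ultimately need is just to know as soon as possible when the container first becomes healthy, you could also poll the health status from outside compose. Here is a minimal Python sketch; it assumes the Docker CLI is on PATH, the service defines a healthcheck, and "myproject_app_1" is only a placeholder for your real container name.

import subprocess, sys, time

def wait_until_healthy(container, poll_seconds=5, timeout=300):
    # Poll "docker inspect" until the container's health status is "healthy".
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True).stdout.strip()
        if status == "healthy":
            return True
        time.sleep(poll_seconds)
    return False

if __name__ == "__main__":
    sys.exit(0 if wait_until_healthy("myproject_app_1") else 1)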
I wrote a script that does this, though I'd rather find a native solution:
#!/bin/sh
HEALTHCHECK_FILE="/root/.healthchecked"
COMMAND=${*?"Usage: healthcheck_retry <COMMAND>"}
if [ -r "$HEALTHCHECK_FILE" ]; then
  LAST_HEALTHCHECK=$(date -r "$HEALTHCHECK_FILE" +%s)
  # FIVE_MINUTES_AGO=$(date -d 'now - 5 minutes' +%s)
  FIVE_MINUTES_AGO=$(( $(date +%s) - 5*60 ))
  echo "Healthcheck file present"
  # if (( $LAST_HEALTHCHECK > $FIVE_MINUTES_AGO )); then
  if [ "$LAST_HEALTHCHECK" -gt "$FIVE_MINUTES_AGO" ]; then
    echo "Healthcheck too recent"
    exit 0
  fi
fi
if $COMMAND; then
  echo "\"$COMMAND\" succeeded: updating file"
  touch "$HEALTHCHECK_FILE"
  exit 0
else
  echo "\"$COMMAND\" failed: exiting"
  exit 1
fi
Which I use: test: /healthcheck_retry.sh curl -fsS localhost:4000/healthcheck
The pain is that I need to make sure the script is available in every container, so I have to create an extra volume for this:
image: postgres:11.6-alpine
volumes:
  - ./scripts/utils/healthcheck_retry.sh:/healthcheck_retry.sh
I am confused about the difference between perf record and perf stat when it comes to counting events like page-faults, cache-misses, and anything else from perf list. I have two questions below; the answer to "Question 1" might also help answer "Question 2", but I wrote them out explicitly in case it doesn't.
Question 1:
It is my understanding that perf stat gets a "summary" of counts, but when used with the -I option it reports counts at the specified millisecond interval. With this option, does it sum the counts over the interval, average them, or something else entirely? I assume they are summed. The perf wiki states the counts are aggregated, but I guess that could mean either.
Question 2:
Why doesn't perf stat -e <event1> -I 1000 sleep 5 give about the same counts as if I summed up the counts over each second for the following command: perf record -e <event1> -F 1000 sleep 5?
For example, if I use "page-faults" as event1, I get the outputs listed below under each command. (I am assuming the period field is the count for the event in perf record's perf.data file.)
PERF STAT
perf stat -e page-faults -I 1000 sleep 5
# time counts unit events
1.000252928 54 page-faults
2.000498389 <not counted> page-faults
3.000569957 <not counted> page-faults
4.000659987 <not counted> page-faults
5.000837864 2 page-faults
PERF RECORD
perf record -e page-faults -F 1000 sleep 5
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.016 MB perf.data (6 samples) ]
perf script -F period
1
1
1
5
38
164
I expected that if I summed up the counts from perf stat I would get the same as the sum from perf record. If I use the -c option with perf record and give an argument of 1 I do get a close match. Is this just a coincidence because of the relatively low number of page faults?
References I have used so far:
brendangregg's perf blog
The perf record and stat links on this page mentioned above as "perf wiki"
I dug around here to see how and when perf record actually records vs when it writes to perf.data.
Thanks in advance for any and all insight you can provide.
First of all, your test case of using sleep and page-faults is not an ideal one: there should be no page-fault events during the sleep duration, so you can't really expect anything interesting. For the sake of easier reasoning, I suggest using the ref-cycles (hardware) event and a busy workload such as awk 'BEGIN { while(1){} }'.
Question 1: It is my understanding that perf stat gets a "summary" of counts, but when used with the -I option it reports counts at the specified millisecond interval. With this option, does it sum the counts over the interval, average them, or something else entirely? I assume they are summed.
Yes. The values are just summed up. You can confirm that by testing:
$ perf stat -e ref-cycles -I 1000 timeout 10s awk 'BEGIN { while(1){} }'
# time counts unit events
1.000105072 2,563,666,664 ref-cycles
2.000267991 2,577,462,550 ref-cycles
3.000415395 2,577,211,936 ref-cycles
4.000543311 2,577,240,458 ref-cycles
5.000702131 2,577,525,002 ref-cycles
6.000857663 2,577,156,088 ref-cycles
[ ... snip ... ]
[ Note that it may not be as nicely consistent on all systems due to dynamic frequency scaling ]
$ perf stat -e ref-cycles -I 3000 timeout 10s awk 'BEGIN { while(1){} }'
# time counts unit events
3.000107921 7,736,108,718 ref-cycles
6.000265186 7,732,065,900 ref-cycles
9.000372029 7,728,302,192 ref-cycles
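If you want to check the aggregation programmatically, perf stat can emit machine-readable output with -x. Below is a minimal sketch that sums the per-interval counts; it assumes the first two comma-separated fields per interval line are the timestamp and the count, which is what current perf versions print, but check your own output.

# Sum the per-interval counts from e.g.:
#   perf stat -x, -I 1000 -e ref-cycles -o stat.csv timeout 10s awk 'BEGIN { while(1){} }'
import sys

total = 0
with open(sys.argv[1]) as f:
    for line in f:
        if line.startswith("#") or not line.strip():
            continue
        fields = line.split(",")
        try:
            total += int(fields[1])
        except (IndexError, ValueError):
            continue  # skips "<not counted>" and malformed lines
print(total)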
Question 2: Why doesn't perf stat -e <event1> -I 1000 sleep 5 give about the same counts as if I summed up the counts over each second for the following command: perf record -e <event1> -F 1000 sleep 5?
perf stat -I is in milliseconds, whereas perf record -F is in Hz (1/s), so the command corresponding to perf stat -I 1000 is perf record -F 1. In fact, with our more stable event/workload, this looks better:
$ perf stat -e ref-cycles -I 1000 timeout 10s awk 'BEGIN { while(1){} }'
# time counts unit events
1.000089518 2,578,694,534 ref-cycles
2.000203872 2,579,866,250 ref-cycles
3.000294300 2,579,857,852 ref-cycles
4.000390273 2,579,964,842 ref-cycles
5.000488375 2,577,955,536 ref-cycles
6.000587028 2,577,176,316 ref-cycles
7.000688250 2,577,334,786 ref-cycles
8.000785388 2,577,581,500 ref-cycles
9.000876466 2,577,511,326 ref-cycles
10.000977965 2,577,344,692 ref-cycles
10.001195845 466,674 ref-cycles
$ perf record -e ref-cycles -F 1 timeout 10s awk 'BEGIN { while(1){} }'
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.008 MB perf.data (17 samples) ]
$ perf script -F time,period
3369070.273722: 1
3369070.273755: 1
3369070.273911: 3757
3369070.273916: 3015133
3369070.274486: 1
3369070.274556: 1
3369070.274657: 1778
3369070.274662: 2196921
3369070.275523: 47192985748
3369072.663696: 2578692405
3369073.663547: 2579122382
3369074.663609: 2580015300
3369075.664085: 2579873741
3369076.664433: 2578638211
3369077.664379: 2578378119
3369078.664175: 2578166440
3369079.663896: 2579238122
So you see, the results eventually become stable for perf record -F as well. Unfortunately, the documentation of perf record is very poor. You can learn what the -c and -F settings mean by looking at the documentation of the underlying system call, man perf_event_open:
sample_period, sample_freq
A "sampling" event is one that generates an overflow notification every N events, where N is given by sample_period. A sampling event has sample_period > 0. When an overflow occurs, requested data is recorded in the mmap buffer. The sample_type field controls what data is recorded on each overflow.
sample_freq can be used if you wish to use frequency rather than period. In this case, you set the freq flag. The kernel will adjust the sampling period to try and achieve the desired rate. The rate of adjustment is a timer tick.
So while perf stat uses an internal timer to read the value of the counter every -I milliseconds, perf record sets an event overflow threshold so that a sample is taken every -c events, i.e. every N events (e.g. every N page-faults or cycles). With -F, it tries to regulate this overflow value to achieve the desired sampling frequency: it tries different values and tunes them up/down accordingly. This eventually works for counters with a stable rate, but gives erratic results for dynamic events.
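To make the comparison the question asks about, you can sum the sample periods stored in perf.data, since each sample's period is the number of events attributed to that sample. A minimal sketch, assuming perf.data is in the current directory and perf is on PATH:

import subprocess

# Sum the per-sample period values from "perf script -F period"; the total should
# roughly match the perf stat total, minus the first few samples taken while -F
# is still calibrating the period.
out = subprocess.run(["perf", "script", "-F", "period"],
                     capture_output=True, text=True, check=True).stdout
total = sum(int(tok) for tok in out.split() if tok.isdigit())
print(total)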
I am following this link and trying to implement the scenarios there.
So I need to generate a data for MANET nodes representing their location in this format:
Current time – latest x – latest y – latest update time – previous x – previous y – previous update time
with the use of setdest tool with these options:
1500 by 300 grid, ran for 300 seconds and used pause times of 20s and maximum velocities of 2.5 m/s.
so I came up with this command:
./setdest -v 2 -n 10 -s 2.5 -m 10 -M 50 -t 300 -p 20 -x 1500 -y 300 > test1.tcl
which worked and generated a tcl file, but I don't know how I can obtain the data in the required format.
setdest -v 2 -n 10 -s 2.5 -m 10 -M 50 -t 300 -p 20 -x 1500 -y 300 > test1.tcl
This is not a tcl file: it is a "scen" / scenario file with 1,700 "ns" commands. Your file was renamed to "test1.scen" and is now used in the manet examples, in the simulation example aodv-manet-20.tcl:
set val(cp) "test1.scen" ;#Connection Pattern
Please be aware that the time settings are maximum times. "Long time settings" were useful ~20 years ago when computers were slow. (Though there are complex simulations lasting half an hour to an hour.)
Link, manet-examples-1.tar.gz https://drive.google.com/file/d/0B7S255p3kFXNR05CclpEdVdvQm8/view?usp=sharing
Edit: New example added, manet0-16-nam.tcl → https://drive.google.com/file/d/0B7S255p3kFXNR0ZuQ1l6YnlWRGc/view?usp=sharing
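As for getting the data into the format described in the question, that part is not produced by setdest itself: you would post-process the generated scen file. Below is a minimal, untested sketch that extracts (node, time, x, y) waypoints, assuming the usual setdest output lines (initial $node_(i) set X_/Y_ statements and $ns_ at <t> "$node_(i) setdest <x> <y> <speed>" movement commands); note that the setdest coordinates are destinations chosen at time t, not the node's position at t, so the "latest/previous update" bookkeeping is left to you.

import re, sys
from collections import defaultdict

# Assumed line formats (hedged):
#   $node_(0) set X_ 1332.49
#   $ns_ at 2.0 "$node_(0) setdest 100.0 200.0 2.5"
init_re = re.compile(r'\$node_\((\d+)\)\s+set\s+([XY])_\s+([\d.]+)')
move_re = re.compile(r'\$ns_?\s+at\s+([\d.]+)\s+"\$node_\((\d+)\)\s+setdest\s+([\d.]+)\s+([\d.]+)')

initial = defaultdict(dict)    # node -> {'X': x, 'Y': y} at t=0
waypoints = defaultdict(list)  # node -> [(time, x, y), ...]

with open(sys.argv[1]) as f:
    for line in f:
        m = init_re.search(line)
        if m:
            initial[int(m.group(1))][m.group(2)] = float(m.group(3))
            continue
        m = move_re.search(line)
        if m:
            t, node = float(m.group(1)), int(m.group(2))
            waypoints[node].append((t, float(m.group(3)), float(m.group(4))))

for node in sorted(initial):
    print(node, 0.0, initial[node].get('X'), initial[node].get('Y'))
    for t, x, y in sorted(waypoints[node]):
        print(node, t, x, y)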
Hi, I am trying to create samples using opencv_createsamples.exe, but it crashes every time. I've checked different builds, opencv2 and opencv3. I don't know how to overcome this. My command looks like this:
opencv_createsamples.exe -img C:/Users/dpach/Pictures/Interfejsy/img/1.jpg -maxxangle 15 -maxyangle 15 -maxzangle 1 -w 80 -h 40 -vec test.vec -bgtresh 0 -bgcolor 0 -show
The window that shows the samples opens, but after that I get a message that the program is not responding. Any ideas?
// EDIT
I've tried to start it from a pseudo-Unix bash, and then I get a Segmentation fault.
// EDIT2
It crashes right after printing "Create training samples from single image applying distortions..."
I submit my code to a Spark standalone cluster. The submit command is like below:
nohup ./bin/spark-submit \
--master spark://ES01:7077 \
--executor-memory 4G \
--num-executors 1 \
--total-executor-cores 1 \
--conf "spark.storage.memoryFraction=0.2" \
./myCode.py 1>a.log 2>b.log &
I specify in the command above that the executor should use 4G of memory. But when I monitor the executor process with the top command, I notice the memory usage keeps growing. The current top output is below:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12578 root 20 0 20.223g 5.790g 23856 S 61.5 37.3 20:49.36 java
My total memory is 16G, so 37.3% is already more than the 4G I specified. And it is still growing.
Using the ps command, you can see that it is the executor process.
[root@ES01 ~]# ps -awx | grep spark | grep java
10409 ? Sl 1:43 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms4G -Xmx4G -XX:MaxPermSize=256m org.apache.spark.deploy.master.Master --ip ES01 --port 7077 --webui-port 8080
10603 ? Sl 6:16 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms4G -Xmx4G -XX:MaxPermSize=256m org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://ES01:7077
12420 ? Sl 10:16 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms1g -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --master spark://ES01:7077 --conf spark.storage.memoryFraction=0.2 --executor-memory 4G --num-executors 1 --total-executor-cores 1 /opt/flowSpark/sparkStream/ForAsk01.py
12578 ? Sl 21:03 java -cp /opt/spark-1.6.0-bin-hadoop2.6/conf/:/opt/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/hadoop-2.6.2/etc/hadoop/ -Xms4096M -Xmx4096M -Dspark.driver.port=52931 -XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler#10.79.148.184:52931 --executor-id 0 --hostname 10.79.148.184 --cores 1 --app-id app-20160511080701-0013 --worker-url spark://Worker#10.79.148.184:52660
Below is the code. It is very simple, so I do not think there is a memory leak.
if __name__ == "__main__":
    dataDirectory = '/stream/raw'
    sc = SparkContext(appName="Netflow")
    ssc = StreamingContext(sc, 20)
    # Read CSV File
    lines = ssc.textFileStream(dataDirectory)
    lines.foreachRDD(process)
    ssc.start()
    ssc.awaitTermination()
The code for the process function is below. Please note that I am using HiveContext, not SQLContext, here, because SQLContext does not support window functions.
def getSqlContextInstance(sparkContext):
    if 'sqlContextSingletonInstance' not in globals():
        globals()['sqlContextSingletonInstance'] = HiveContext(sparkContext)
    return globals()['sqlContextSingletonInstance']

def process(time, rdd):
    if rdd.isEmpty():
        return sc.emptyRDD()
    sqlContext = getSqlContextInstance(rdd.context)
    # Convert CSV File to Dataframe
    parts = rdd.map(lambda l: l.split(","))
    rowRdd = parts.map(lambda p: Row(router=p[0], interface=int(p[1]), flow_direction=p[9], bits=int(p[11])))
    dataframe = sqlContext.createDataFrame(rowRdd)
    # Get the top 2 interface of each router
    dataframe = dataframe.groupBy(['router', 'interface']).agg(func.sum('bits').alias('bits'))
    windowSpec = Window.partitionBy(dataframe['router']).orderBy(dataframe['bits'].desc())
    rank = func.dense_rank().over(windowSpec)
    ret = dataframe.select(dataframe['router'], dataframe['interface'], dataframe['bits'], rank.alias('rank')).filter("rank<=2")
    ret.show()
    dataframe.show()
Actually, I found that the code below causes the problem:
# Get the top 2 interface of each router
dataframe = dataframe.groupBy(['router','interface']).agg(func.sum('bits').alias('bits'))
windowSpec = Window.partitionBy(dataframe['router']).orderBy(dataframe['bits'].desc())
rank = func.dense_rank().over(windowSpec)
ret = dataframe.select(dataframe['router'],dataframe['interface'],dataframe['bits'], rank.alias('rank')).filter("rank<=2")
ret.show()
If I remove these 5 lines, the code can run all night without any memory increase. But adding them causes the executor's memory usage to grow to a very high number.
Basically, the above code is just some window + groupBy in Spark SQL. So is this a bug?
Disclaimer: this answer isn't based on debugging, but more on observations and the documentation Apache Spark provides
I don't believe that this is a bug to begin with!
Looking at your configurations, we can see that you are focusing mostly on the executor tuning, which isn't wrong, but you are forgetting the driver part of the equation.
Looking at the Spark cluster overview from the Apache Spark documentation:
As you can see, each worker has an executor; however, in your case the worker node is the same as the driver node! That is frankly the case when you run locally or on a single-node standalone cluster.
Further, the driver takes 1G of memory by default unless tuned using the spark.driver.memory flag. Furthermore, you should not forget about the heap usage of the JVM itself, and the Web UI, which is also handled by the driver, AFAIK!
When you delete the lines of code you mentioned, your code is left without any actions: map is just a transformation, hence there is no execution, and therefore you don't see any memory increase at all!
The same applies to groupBy: it is just a transformation that will not be executed until an action is called, which in your case is show further down the stream!
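A toy illustration of that lazy-evaluation point (a hypothetical example, not the OP's job; it assumes an existing SparkContext sc, as in the code above):

# Transformations only build a lineage/plan; nothing executes until an action runs.
rdd = sc.parallelize(range(1000000))
doubled = rdd.map(lambda x: x * 2)               # transformation: no work happens yet
filtered = doubled.filter(lambda x: x % 3 == 0)  # still no work, no memory growth
print(filtered.count())                          # count() is an action: execution starts here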
That said, try to minimize your driver memory and the overall number of cores in Spark (defined by spark.cores.max, if you want to control the number of cores for this process), then cascade down to the executors. Moreover, I would add spark.python.profile.dump to your configuration so you can see a profile of your Spark job execution, which can help you understand the case better and tune your cluster to your needs.
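For instance, those settings could look like this when the SparkContext is created (a sketch; the dump directory is a placeholder, and driver memory itself is better passed as --driver-memory to spark-submit, since it must be fixed before the driver JVM starts):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("Netflow")
        .set("spark.cores.max", "1")                        # cap total cores for the app
        .set("spark.python.profile", "true")                # enable the Python worker profiler
        .set("spark.python.profile.dump", "/tmp/spark_profile"))  # placeholder dump directory
sc = SparkContext(conf=conf)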
As far as I can see in your 5 lines, maybe the groupBy is the issue; would you try it with reduceBy and see how it performs?
See here and here.