I'm using the Percona Query Playback tool and I want to run multiple clients at once
This is the sample command
/usr/local/bin/percona-playback --queue-depth 99999 --mysql-max-retries 0 --mysql-host somehost.xxx.com --mysql-username xxx --mysql-password xxxx --mysql-schema xxx --query-log-file some_slow_log.log
I want to be able to run that 30x concurrently. What tool/framework/library should I look at?
If you are running this from a terminal, you can use a for loop:
for run in {1..30}
do
command &
done
The trailing & runs each process in the background, so you can continue to use the shell and do not have to wait until the command is finished:
for run in {1..30}
do
/usr/local/bin/percona-playback --queue-depth 99999 --mysql-max-retries 0 --mysql-host somehost.xxx.com --mysql-username xxx --mysql-password xxxx --mysql-schema xxx --query-log-file some_slow_log.log &
done
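If you also want the script to block until all 30 clients have finished (to time a complete benchmark run, for example), here is a minimal sketch using the bash wait builtin; the per-client log file names are my own addition, not part of your original command:

#!/bin/bash
# Launch 30 playback clients in the background, each writing to its own log file
# (the playback_N.log naming is an assumption for illustration).
for run in {1..30}; do
  /usr/local/bin/percona-playback --queue-depth 99999 --mysql-max-retries 0 \
    --mysql-host somehost.xxx.com --mysql-username xxx --mysql-password xxxx \
    --mysql-schema xxx --query-log-file some_slow_log.log > "playback_$run.log" 2>&1 &
done
wait   # returns only after every background client has exited

wait with no arguments waits for all background jobs started by the current shell, so the script exits only once every client is done.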
I am using vsql.exe on an external Vertica database for which I don't have any administrative access. I use some views with simple SELECT+FROM+WHERE queries.
These queries work just fine 90% of the time, but sometimes, randomly, I get this error:
ERROR 3326: Execution time exceeded run time cap of 00:00:45
The strange thing is that this error can show up well after those 45 seconds, even after 3 minutes. I've been told this is related to having different resource pools, but I don't want to dig into that anyway.
The problem is that when this occurs, vsql.exe returns errorlevel 0 and there is (apparently almost) no way to know this failed.
The output of the query is stored in a CSV file. When the query succeeds, the file ends with (#### rows). But when it fails with this error, the file just stops at an arbitrary point, and its size is around half of what's expected. This is of course not what you would expect when an error occurs; you would expect no output, or an empty file.
If there is a connection error or if the query has syntax errors, the errorlevel is not 0, so in those cases it behaves as expected.
I've tried many things, like increasing the timeout or adding -v ON_ERROR_STOP=ON to the vsql.exe parameters, but none of that helped.
I've googled a lot and found many people having this error, but the solutions are mostly related to increasing the timeouts, not related to the errorlevel returned.
Any help will be greatly appreciated.
TL;DR: how can I detect an error 3326 in a batch file like this?
@echo off
vsql.exe -h <hostname> -U <user> -w <pwd> -o output.csv -Ac "SELECT ....;"
echo %errorlevel% is always 0
if errorlevel 1 echo Error!! But this is never displayed.
Now that's really unexpected to me. I don't have Windows available just now, so I'm trying on my Mac. At first, just triggering a deliberate error:
$ vsql -h zbook -d sbx -U dbadmin -w $VSQL_PASSWORD -v ON_ERROR_STOP=ON -Ac "select * from foobarfoo"
ERROR 4566: Relation "foobarfoo" does not exist
$ echo $?
1
With ON_ERROR_STOP set to ON, this should be the behaviour everywhere.
Could you try what I did above on Windows, with echo %ERRORLEVEL% instead of echo $?, directly at the Windows command prompt rather than in a batch file?
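For reference, a minimal sketch of that test at the Windows prompt (the connection placeholders are the same as in your example):

vsql.exe -h <hostname> -U <user> -w <pwd> -v ON_ERROR_STOP=ON -Ac "select * from foobarfoo"
echo %ERRORLEVEL%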
Next test: I run on resource pool general in my little test database, so I temporarily modify it to a runtime cap of 30 seconds, run a silly query that will take over 30 seconds with ON_ERROR_STOP set to ON, collect the value returned by vsql, and set the runtime cap of general back to NONE. I also have the %VSQL_*% environment variables set so I don't have to repeat them all the time:
rem Windows way to set environment variables for vsql:
set VSQL_HOST=zbook
set VSQL_DATABASE=sbx
set VSQL_USER=dbadmin
set VSQL_PASSWORD=***masked***
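For completeness, the Linux/macOS equivalent I rely on here (the password value is a placeholder):

# Linux/macOS way to set the same environment variables for vsql:
export VSQL_HOST=zbook
export VSQL_DATABASE=sbx
export VSQL_USER=dbadmin
export VSQL_PASSWORD=...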
Now for the test. (In Linux/macOS, a backslash at the end of a line escapes the newline, which lets you "word wrap" a shell command; on Windows, use the caret (^) for that.)
marco ~/1/Vertica/supp $ # set a runtime cap
marco ~/1/Vertica/supp $ vsql -i -c \
"alter resource pool general runtimecap '00:00:30'"
ALTER RESOURCE POOL
Time: First fetch (0 rows): 116.326 ms. All rows formatted: 116.730 ms
marco ~/1/Vertica/supp $ vsql -v ON_ERROR_STOP=ON -iAc \
"select count(*) from one_million_rows a cross join one_million_rows b"
ERROR 3326: Execution time exceeded run time cap of 00:00:30
marco ~/1/Vertica/supp $ # test the return code
marco ~/1/Vertica/supp $ echo $?
1
marco ~/1/Vertica/supp $ # clear the runtime cap
marco ~/1/Vertica/supp $ vsql -i -c \
"alter resource pool general runtimecap NONE "
ALTER RESOURCE POOL
Time: First fetch (0 rows): 11.148 ms. All rows formatted: 11.383 ms
So it works in my case. Your line:
if errorlevel 1 echo Error!! But this is never displayed.
... never echoes anything, because the preceding line's echo itself returns 0 to the shell, overriding the errorlevel that vsql set.
Try it command by command at your Windows command prompt and see what happens. Just echo %errorlevel%, without evaluating it.
I also notice that you are trying to export to CSV format. In that case, try this:
Format the output unaligned (-A)
Set the field separator to a comma (-F ',')
Remove the footer '(n rows)' (-P footer)
Limit the output to 5 rows in the query for the test
(I show the output before redirecting it to a file):
marco ~/1/Vertica/supp $ vsql -A -F ',' -P footer -c "select * from one_million_rows limit 5"
id,id_desc,dob,category,busid,revenue
0,0,1950-01-01,1,====== boss ========,0.000
1,-1,1950-01-02,2,kbv-000001kbv-000001,0.010
2,-2,1950-01-03,3,kbv-000002kbv-000002,0.020
3,-3,1950-01-04,4,kbv-000003kbv-000003,0.030
4,-4,1950-01-05,5,kbv-000004kbv-000004,0.040
Not aligning is much faster than aligning.
Then, since you spend most of the time fetching rows (which is why the timeout hits in the middle of writing the output file), try fetching more rows at a time than the default 1000. You will need to play with the value, depending on the network settings at your site, until you find the one that works best:
-v ROWS_AT_A_TIME=10000
Once you're happy with the tested output, try this command (change the SELECT for your needs, of course ....):
marco ~/1/Vertica/supp $ vsql -A -F ',' -P footer \
-v ON_ERROR_STOP=ON -v ROWS_AT_A_TIME=10000 -o one_million_rows.csv \
-c "select * from one_million_rows"
marco ~/1/Vertica/supp $ wc -l one_million_rows.csv
1000001 one_million_rows.csv
The table actually contains one million rows. Note the line count in the file: 1,000,001. That's with the title line included and the footer (1000000 rows) removed.
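Putting it all together, a sketch of the final command as I would run it here; swap in your own query, and note that the output file name is just an example:

# unaligned CSV output, no footer, stop on error, bigger fetch size;
# the connection settings come from the VSQL_* environment variables
vsql -A -F ',' -P footer -v ON_ERROR_STOP=ON -v ROWS_AT_A_TIME=10000 \
     -o output.csv -c "SELECT ....;" || echo "vsql failed with exit code $?"

The || branch fires only when vsql returns a non-zero exit code, which, with ON_ERROR_STOP set to ON, includes the error 3326 case.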
I made a script (findx.sh) that runs without any problem on a Solaris server via the console (bash-3.2$ ./findx.sh).
The problem appears when I try to run it from a Windows Qt app using QProcess (code below): it doesn't display the output of the command.
I tried small variations, and output does appear when I use just one pipe instead of two. But I need both pipes: the one feeding the password to sudo -S and the one into ggrep.
# findx.sh on Solaris
# WHAT WORKS
#!/bin/bash
echo pass | sudo -S /usr/sbin/snoop -x0 -ta HSM1000 port 1000
# WHAT I WANT
#!/bin/bash
echo pass | sudo -S /usr/sbin/snoop -x0 -ta HSM1000 port 1000 | /usr/sfw/bin/ggrep -A 2 KR01
// Qt on Windows
QString commands="(";
commands +="source setpath.sh";
commands +=";/path/to/script/findx.sh";
commands +=")";
this->logged=false;
QString program = "plink.exe";
QStringList arguments;
arguments <<"-ssh"
<<ip
<<"-l"
<<user
<<"-pw"
<<pass
<<commands;
this->myProcess=new QProcess(this);
connect(this->myProcess,SIGNAL(started()),
this, SLOT(onprocess_started()));
connect(this->myProcess, SIGNAL(errorOccurred(QProcess::ProcessError)),
this, SLOT(onprocess_errorOcurred(QProcess::ProcessError)));
connect(this->myProcess, SIGNAL(finished(int, QProcess::ExitStatus)),
this, SLOT(onprocess_finished(int, QProcess::ExitStatus)));
connect(this->myProcess, SIGNAL(readyReadStandardError()),
this, SLOT(onprocess_readyReadStandardError()));
connect(this->myProcess, SIGNAL(readyReadStandardOutput()),
this, SLOT(onprocess_readyReadStandardOutput()));
connect(this->myProcess, SIGNAL(stateChanged(QProcess::ProcessState)),
this, SLOT(onprocess_stateChanged(QProcess::ProcessState)));
this->myProcess->start(program, arguments);
this->ui->labStatus->setText("Starting");
return 0;
// How I read stdout; I do the same for stderr and also append it to plainOutput
QByteArray out = this->myProcess->readAllStandardOutput();
QString m = "Standard output:" + QString(out.data());
this->ui->plainOutput->appendPlainText(m);
Please, any advice would be useful.
Thanks in advance.
I'm using Nagios Core 4.3.4. Is there any way to monitor the number of users connected to a Windows server via RDP, like the NRPE check_users? Please tell me if you know of one.
You would have to write your own check for this.
In your check you could call a PowerShell script on the server (whether this works depends on your Windows version):
ipmo RemoteDesktop # 1. import the RemoteDesktop module
$(Get-RDUserSession).count # 2. print the count of the sessions
But there is another approach mentioned on the monitoring-portal.org site. It's in German, so I'll translate:
1.) Read Windows performance counters with NSClient++:
c:\program files\nsclient\nsclient++.exe -noboot CheckSystem listpdh >counters_list.txt
2.) Define the command (where -s $USER7$ is the passphrase to establish the connection):
define command{
command_name check_nt_Counter_User
command_line $USER1$/check_nt -H $HOSTADDRESS$ -s $USER7$ -p 12489 -v COUNTER -l $ARG1$ -w $ARG2$ -c $ARG3$ -d SHOWALL
}
3.) Define the service:
define service{
service_description RDP-Sessions
host_name TerminalSrv
use sometemplate
check_command check_nt_Counter_User!"\\Terminalservices\\active sessions","RDP-User active","users"!18!20
notes get count of active sessions
process_perf_data 1
notifications_enabled 0
}
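Before wiring this into Nagios, you can test the counter query by hand from the Nagios host. A sketch, assuming the usual plugin path for $USER1$ and with the host, passphrase and thresholds as placeholders; the -l quoting mirrors the service definition above and may need adjusting for your shell:

# manual test of the counter check from the Nagios host
/usr/local/nagios/libexec/check_nt -H terminalsrv.example.com -s '<passphrase>' -p 12489 \
    -v COUNTER -l '"\\Terminalservices\\active sessions","RDP-User active","users"' \
    -w 18 -c 20 -d SHOWALL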
Right now I'm using the following code:
while read num; do
  M=$(curl "myurl/$num")
  echo "$M"
done < s.txt
where s.txt contains a list (one per line) of parts of the URL.
Is it correct to assume that curl runs sequentially?
Or does it run in threads/jobs/multiple connections at a time?
I've found this online:
parallel -k curl -s "http://example.com/locations/city?limit=100\&offset={}" ::: $(seq 100 100 30000) > out.txt
The problem is that my sequence comes from a file or from a variable (one element per line), and I can't adapt the example to my needs.
I haven't fully understood how to pass the list to parallel.
Should I save all the curl commands in a list and run it with parallel -a?
Regards,
parallel -j100 -k curl myurl/{} < s.txt
Consider spending an hour walking through man parallel_tutorial. Your command line will love you for it.
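Spelled out a little more, a sketch of the same idea, with the -s flag from the example you found and the output captured to a file (out.txt is just an example name):

# read one URL part per line from s.txt, run up to 100 curls at a time (-j100),
# and keep the output in input order (-k)
parallel -j100 -k 'curl -s "myurl/{}"' < s.txt > out.txt

When parallel is not given ::: arguments, it reads one input line at a time from stdin, which is exactly what the < s.txt redirection provides.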
Can you help me understand why I sometimes (about 50:50) get:
webkit_server.NoX11Error: Cannot connect to X. You can try running with xvfb-run.
When I start the script in parallel as:
xvfb-run -a python script.py
You can reproduce this yourself like so:
for ((i=0; i<10; i++)); do
xvfb-run -a xterm &
done
Of the 10 instances of xterm this starts, 9 of them will typically fail, exiting with the message Xvfb failed to start.
Looking at xvfb-run 1.0, it operates as follows:
# Find a free server number by looking at .X*-lock files in /tmp.
find_free_servernum() {
    # Sadly, the "local" keyword is not POSIX. Leave the next line commented in
    # the hope Debian Policy eventually changes to allow it in /bin/sh scripts
    # anyway.
    #local i

    i=$SERVERNUM
    while [ -f /tmp/.X$i-lock ]; do
        i=$(($i + 1))
    done
    echo $i
}
This is very bad practice: If two copies of find_free_servernum run at the same time, neither will be aware of the other, so they both can decide that the same number is available, even though only one of them will be able to use it.
So, to fix this, let's write our own code to find a free display number, instead of assuming that xvfb-run -a will work reliably:
#!/bin/bash
# allow settings to be updated via environment
: "${xvfb_lockdir:=$HOME/.xvfb-locks}"
: "${xvfb_display_min:=99}"
: "${xvfb_display_max:=599}"

# assuming only one user will use this, let's put the locks in our own home directory
# (this avoids vulnerability to symlink attacks)
mkdir -p -- "$xvfb_lockdir" || exit

i=$xvfb_display_min     # minimum display number
while (( i < xvfb_display_max )); do
  if [ -f "/tmp/.X$i-lock" ]; then     # still avoid an obvious open display
    (( ++i )); continue
  fi
  exec 5>"$xvfb_lockdir/$i" || continue     # open a lockfile
  if flock -x -n 5; then                    # try to lock it
    exec xvfb-run --server-num="$i" "$@" || exit   # if locked, run xvfb-run
  fi
  (( i++ ))
done
If you save this script as xvfb-run-safe, you can then invoke:
xvfb-run-safe python script.py
...and not worry about race conditions so long as no other users on your system are also running xvfb.
This can be tested like so:
for ((i=0; i<10; i++)); do xvfb-run-safe xchat & done
...in which case all 10 instances correctly start up and run in the background, as opposed to:
for ((i=0; i<10; i++)); do xvfb-run -a xchat & done
...where, depending on your system's timing, nine out of ten will (typically) fail.
This question was asked in 2015.
In my version of xvfb (2:1.20.13-1ubuntu1~20.04.2), this problem has been fixed.
It looks at /tmp/.X*-lock to find an available display number, and then runs Xvfb. If Xvfb fails to start, it picks a new number and retries, up to 10 times.
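If you want to check whether your xvfb-run already includes that retry behaviour, one option is to look at the installed package version; a sketch for Debian/Ubuntu, assuming the package is named xvfb as above:

# print the installed xvfb package version (Debian/Ubuntu)
dpkg -s xvfb | grep '^Version:'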