Avoid waiting for the timeout in Expect programming

We have written a login method using Expect. It waits for the password prompt, and if a timeout occurs it errors out. Here is the Tcl code we have written for that:
expect {
    -i $var -re ".*(yes/no)." {
        send -i $var "yes\r"
        expect {
            -i $var -re ".*pass" {
                send -i $var "$pwd\r"
            }
            timeout {
                puts "Check IP and Password ...timed out"
                return 0
            }
        }
    }
    -i $var -re ".*pass" {
        send -i $var "$pwd\r"
        expect {
            -i $var -re ".*Permission denied" {
                exp_continue
            }
            -i $var -re "Permission denied" {
                puts "login not successful - Check IP and Password"
                return 0
            }
        }
    }
    timeout {
        puts "login not successful, Check IP and Password ... timed out"
        return 0
    }
}
puts "Connection established."
Now we are observing that the code waits for the full timeout period even when the login succeeds, which wastes time.
Can anyone suggest how to return success as soon as the login happens, instead of waiting for the timeout to expire?

With exp_continue, we can handle this in an easy manner.
set prompt "#|>|\\\$"; # Some commonly used prompts
# We escaped the dollar symbol with backslashes, to treat it as a literal dollar
expect {
    -i $var
    timeout             {puts "Timeout happened"; return 0}
    "(yes/no)"          {send -i $var "yes\r"; exp_continue}
    -re ".*pass"        {send -i $var "$pwd\r"; exp_continue}
    "Permission denied" {puts "Permission denied"; return 0}
    -re $prompt         {puts "Login successful!!!"; return 1}
}

Related

Need to verify/check ipv6 address using ping in lua script

I am not familiar with Lua, but I need some help.
The current Lua script receives a structure. That structure has an address parameter with two indexed entries: an IPv6 and an IPv4 address.
The Lua script needs to implement the following:
ping the IPv6 address and store the result in a local variable;
if the ping succeeds, connect via uv.tcp_connect to the IPv6 address;
otherwise, run the same check for the IPv4 address and try uv.tcp_connect with it.
I am using an online Lua editor, and there it returns nil:
local results = load('ping -q -c1 -6 localhost 2>&1 >/dev/null && printf "IPv6: true" || (ping -q -c1 www.google.com 2>&1 >/dev/null && printf "IPv4 true" || printf "false")')
print(results)
output is: nil
And if I use the following in the online Lua editor:
local handler = io.popen("ping -c 3 -i 0.5 www.google.com") -- wrong here
local response = handler:read("*a")
print(response)
I get the error:
lua: main.lua:3: expected near '"ping -c 3 -i 0.5 www.google.com"'
Kindly suggest what I am missing.
To store the output of system commands, I suggest io.popen().
Here is an example of a conditional ping that tries IPv6 first and, if that fails, IPv4...
> code.cmd
-- cmd(shell)
return function(shell)
return io.popen(shell, 'r'):read('a+')
end
> results={}
> results.ping=load(code.cmd)()('ping -q -c1 -6 localhost 2>&1 >/dev/null && printf "IPv6: true" || (ping -q -c1 localhost 2>&1 >/dev/null && printf "IPv4 true" || printf "false")')
> print(results.ping)
IPv6: true
...typed in a Lua console.
EDIT
Online Lua environments don't support the above code!
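Independent of Lua, the shell chain inside that command string can be sanity-checked on its own. Below is a minimal sketch where ipv6_ok and ipv4_ok are hypothetical stand-ins for the two ping probes (swap in the real ping commands):

```shell
# Stand-ins for the two ping probes; in real use these would be e.g.
#   ping -q -c1 -6 localhost >/dev/null 2>&1
ipv6_ok() { false; }  # simulate: IPv6 ping failed
ipv4_ok() { true; }   # simulate: IPv4 ping succeeded

# Same &&/|| fallback chain as in the command string above
ipv6_ok && printf 'IPv6: true' \
  || { ipv4_ok && printf 'IPv4: true' || printf 'false'; }
# prints: IPv4: true
```

In Lua, the whole chain would then be handed to io.popen, not load: load compiles Lua source, it does not run shell commands, which is why the original attempt returned nil.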

Docker password store in .docker/config.json

In .docker/config.json I see my password stored as QA==. My password ends with @.
{
"auths": {
"registry.nmlv.nml.com": {
"auth": "QA==",
"email": "foo#bar.com"
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.4 (darwin)"
}
}
The auth property is actually base64 of username:password; however, the base64 of my password would end with QAo=. I am wondering how Docker is changing the password?
I used the base64 <<< @ command.
When you manually base64-encode strings at the command line, you need to be careful to not include a newline. echo -n is helpful for this.
$ echo -n '@' | base64
QA==
This matches what's in your .docker/config.json file. If I decode your other string
$ echo -n 'QAo=' | base64 -D | od -t x1
0000000 40 0a
it contains two bytes, ASCII 0x40 (@) and 0x0a (newline).
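The newline effect is easy to reproduce in any shell (GNU coreutils base64 shown; on macOS the decode flag is -D instead of -d):

```shell
# '@' alone (no trailing newline) encodes to QA==
printf '@' | base64
# QA==

# echo and here-strings append a newline, so '@\n' encodes to QAo=
echo '@' | base64
# QAo=

# Decoding QAo= shows both bytes: 0x40 ('@') and 0x0a (newline)
echo 'QAo=' | base64 -d | od -An -t x1
```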

Unable to connect to remote server through SSH in Jenkins

Scenario: while connecting, the server asks for mount-point details dynamically, so I get the error below.
Script
node('agent') {
    stage('Sync Repo') {
        sshagent(['poc_ssh_key']) {
            sh """
            ssh -p XXX user@IP $mountpoint(Data003)
            """
        }
    }
}
Error:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This system has been onboarded to TPAM. Please use the TPAM interface link below to request privileged access to the server.
!! TPAM URL:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Please enter the root path variable [data001/data002/data003/data004/data005]
You entered:
After a lot of investigation, I found the solution below, and it's working as expected.
node('agent') {
    stage('Sync Repo') {
        sshagent(['poc_ssh_key']) {
            sh """
            echo 'data003' | ssh -p 2022 user@IP
            """
        }
    }
}
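The trick generalizes: anything the remote side reads from stdin can be fed through the pipe. A local sketch, with read_mountpoint as a hypothetical stand-in for the remote prompt:

```shell
# Stand-in for the remote script that prompts for the mount point
read_mountpoint() {
  printf 'Please enter the root path variable: '
  read -r mp
  printf '\nYou entered: %s\n' "$mp"
}

# Piping the answer satisfies the prompt non-interactively,
# just like `echo 'data003' | ssh ...` in the pipeline above
echo 'data003' | read_mountpoint
```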

Forcing mutual exclusion for multi branch pipeline builds

The following is an excerpt from a Jenkinsfile used in a multi-branch pipeline:
def GetNextFreePort() {
    lock ('portProvider') {
        def port = powershell(returnStdout: true, script: '((Get-NetTCPConnection | Sort-Object -Property LocalPort | Select-Object -Last 1).LocalPort) + 1')
    }
    return port.trim()
}
I'd like the line that gets the port number (on Windows) to return a different port for each branch. However, despite using the Lockable Resources plugin, I cannot serialize access to the PowerShell callout that gets the next available port.
In the end I managed to achieve what I wanted to do via the Jenkins lock resource plugin. Here is my method to obtain an external port number and start the container:
def StartContainer() {
    PORT_NUMBER = GetNextFreePort()
    bat "docker run -e \"ACCEPT_EULA=Y\" -e \"SA_PASSWORD=P@ssword1\" --name ${CONTAINER_NAME} -d -i -p ${PORT_NUMBER}:1433 microsoft/mssql-server-linux:2017-GA"
    powershell "While (\$((docker logs ${CONTAINER_NAME} | select-string ready | select-string client).Length) -eq 0) { Start-Sleep -s 1 }"
}
and here is the call to this which is wrapped by a call to lock:
stage('start container') {
    steps {
        RemoveContainer()
        timeout(time: 20, unit: 'SECONDS') {
            lock ('create SQL Server container') {
                StartContainer()
            }
        }
    }
}
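Outside Jenkins, the same mutual-exclusion idea can be sketched at the shell level with flock from util-linux (an illustration of the pattern, not what the Lockable Resources plugin does internally; the lock file path and port value are placeholders):

```shell
# Serialize a critical section across concurrent processes with flock
lockfile=/tmp/portProvider.lock

get_next_free_port() {
  (
    flock 9           # blocks until no other process holds the lock
    # critical section: probe for a free port here; fixed value for the demo
    echo 50000
  ) 9>"$lockfile"
}

get_next_free_port
```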

GNU parallel sorted stdout and stderr

I've been using GNU parallel and I want to keep the output order (--keep-order), grouped by job (--group), but also with stdout and stderr sorted relative to each other. Right now, the grouping option prints a job's stdout first and only afterwards its stderr.
As an example, any way that these two commands give the same output?
seq 4 | parallel -j0 'sleep {}; echo -n start{}>&2; sleep {}; echo {}end'
seq 4 | parallel -j0 'sleep {}; echo -n start{} ; sleep {}; echo {}end'
As per the comment on the other answer, to keep the output ordered, simply have parallel's shell invocation redirect stderr to stdout:
parallel myfunc '2>&1'
E.g.,
parallel -j8 eval \{1} -w1 \{2} '2>&1' ::: "traceroute -a -f9" traceroute6 ::: ordns.he.net one.one.one.one google-public-dns-a.google.com
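The effect of that trailing 2>&1 can be seen without parallel at all. With a job that writes to both streams, merging stderr into stdout preserves the write order within the job:

```shell
# A job that writes to stderr first, then to stdout
job() { printf 'start%s' "$1" >&2; printf '%send\n' "$1"; }

# With the merge, both streams land on stdout in the order written
job 1 2>&1
# prints: start11end
```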
You cannot do that if you still want stderr and stdout to be separated.
The reason is that stderr and stdout are buffered to two different files using buffered output.
But maybe you can explain a bit more about what you need this for; in that case there might be a solution.
Assuming that you don't have to use GNU parallel, and the main requirements are parallel execution with ordered output of both stderr and stdout, we can create a solution that allows for the following example usage (and also provides the return code). You will have the results of the executions in a list, where each list element is in turn a list of three strings, indexed as 0=stdout, 1=stderr and 2=return code.
source mapfork.sh
ArgsMap=("-Pn" "-p" "{}" "{}")
Args=("80" "google.com" "25" "tutanota.com" "80" "apa bepa")
declare -a Results=$(mapfork nmap "(${ArgsMap[*]@Q})" "(${Args[*]@Q})")
So, in order to print, for example, the stderr results of the third destination ("apa bepa"), you can do:
declare -a res3="${Results[2]}"
declare -p res3
# declare -a res3=([0]=$'Starting Nmap 7.70 ( https://nmap.org ) at 2019-06-21 18:55 CEST\nNmap done: 0 IP addresses (0 hosts up) scanned in 0.09 seconds' [1]=$'Failed to resolve "apa bepa".\nWARNING: No targets were specified, so 0 hosts scanned.' [2]="0")
printf '%b\n' "${res3[1]}"
mapfork.sh is shown below. It is a bit complicated, but its parts have been explained in other answers, so I won't repeat the details here:
Capture both stdout and stderr in Bash [duplicate]
How can I make an array of lists (or similar) in bash?
#!/bin/bash
# reference: https://stackoverflow.com/questions/13806626/capture-both-stdout-and-stderr-in-bash
nullWrap(){
  local -i i; i="$1"
  local myCommand="$2"
  local -a myCommandArgs="$3"
  local myfifo="$4"
  local stderr
  local stdout
  local stdret
  . <(\
    { stderr=$({ stdout=$(eval "$myCommand ${myCommandArgs[*]@Q}"); stdret=$?; } 2>&1 ;\
    declare -p stdout >&2 ;\
    declare -p stdret >&2) ;\
    declare -p stderr;\
    } 2>&1)
  local -a Arr=("$stdout" "$stderr" "$stdret")
  printf "${i}:%s\u0000" "(${Arr[*]@Q})" > "$myfifo"
}
mapfork(){
  local command
  command="$1"
  local -a CommandArgs="$2"
  local -a Args="$3"
  local -a PipedArr
  local -i i
  local myfifo=$(mktemp /tmp/temp.XXXXXXXX)
  rm "$myfifo"
  mkfifo "$myfifo"
  local -a placeHolders=()
  for ((i=0;i<${#CommandArgs[@]};i++)); do
    [[ "${CommandArgs[$i]}" =~ ^\{\}$ ]] && placeHolders+=("$i") ;done
  for ((i=0;i<${#Args[@]};i+=0)); do
    # if we have placeholders in CommandArgs we need to take args
    # from Args to replace.
    if [[ ${#placeHolders[@]} -gt 0 ]]; then
      for ii in "${placeHolders[@]}"; do
        CommandArgs["$ii"]="${Args[$i]}"
        i+=1; done; fi
    nullWrap "$i" "$command" "(${CommandArgs[*]@Q})" "$myfifo" &
  done
  for ((i=0;i<${#Args[@]};i+=$(("${#placeHolders[@]}")))) ; do
    local res
    res=$(read -d $'\u0000' -r temp <"$myfifo" && printf '%b' "$temp")
    local -i resI
    resI="${res%%:*}"
    PipedArr[$resI]="${res#*:}"
  done
  # reference: https://stackoverflow.com/questions/41966140/how-can-i-make-an-array-of-lists-or-similar-in-bash
  printf '%s' "(${PipedArr[*]@Q})"
}
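The dot-sourced subshell in nullWrap is the compact way to get stdout, stderr and the return code into three variables in one pass. As a simpler (if less elegant) temp-file sketch of the same separation, with capture as a hypothetical helper:

```shell
# Capture stdout, stderr and exit status of a compound command separately
capture() {
  local tmp_err rc
  tmp_err=$(mktemp)
  out=$( { printf 'to-stdout'; printf 'to-stderr' >&2; exit 3; } 2>"$tmp_err" )
  rc=$?                 # exit status of the command substitution
  err=$(<"$tmp_err")
  rm -f "$tmp_err"
  printf 'out=%s err=%s rc=%s\n' "$out" "$err" "$rc"
}
capture
# prints: out=to-stdout err=to-stderr rc=3
```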