Expect gets stuck inside a foreach loop after the first iteration

Output is stuck after the first iteration. It works fine when there is only one expect inside the loop, and adding exp_continue does not help either.
#!/usr/bin/expect -f
exp_internal 1
set timeout -1
set passwords [list foo bar test]
set connected false
set passwordUsed "test"
spawn ssh -oHostKeyAlgorithms=+ssh-dss root@192.168.1.136 -y
foreach i $passwords {
    expect "assword:" { send -- "$i\r" }
    expect "asd" { send "test" }
}
expect eof
Output:
spawn ssh -oHostKeyAlgorithms=+ssh-dss root@192.168.1.136 -y
root@192.168.1.136's password:
root@192.168.1.136's password:
debug
parent: waiting for sync byte
parent: telling child to go ahead
parent: now unsynchronized from child
spawn: returns {1937}
expect: does "" (spawn_id exp4) match glob pattern "assword:"? no
expect: does "\r" (spawn_id exp4) match glob pattern "assword:"? no
expect: does "\rroot@192.168.1.136's password: " (spawn_id exp4) match glob pattern "assword:"? yes
expect: set expect_out(0,string) "assword:"
expect: set expect_out(spawn_id) "exp4"
expect: set expect_out(buffer) "\rroot@192.168.1.136's password:"
send: sending "foo\r" to { exp4 }
expect: does " " (spawn_id exp4) match glob pattern "asd"? no
expect: does " \r\n" (spawn_id exp4) match glob pattern "asd"? no
expect: does " \r\n\rroot@192.168.1.136's password: " (spawn_id exp4) match glob pattern "asd"? no
Then hangs on the last expect.

The timeout value is the problem: with set timeout -1, expect waits forever to see "asd".
You need something like this (untested):
set passwords {foo bar baz}
set idx 0
spawn ...
expect {
    "assword:" {
        if {$idx == [llength $passwords]} {
            error "none of the passwords succeeded"
        }
        send "[lindex $passwords $idx]\r"
        incr idx
        exp_continue
    }
    "asd" {send "test"}
}
expect eof
exp_continue loops within the expect command, waiting for "asd" or for "assword" to appear again.
The "asd" case does not use exp_continue, so after sending "test" this expect command ends.

Related

"nawk if else if else" not working

OK, this is most probably going to sound like a stupid question, but I can't make it work and really don't know what I'm doing wrong, even after reading quite a few nawk/awk help sites:
$ echo -e "hey\nthis\nworld" | nawk '{ if ( $1 !~ /e/ ) { print $0; } else if ($1 !~ /o/ ) { print $0; } else { print "condition not matched"; } }'
hey
this
world
$
I'd prefer to have it on one line but also tried on multiple lines as seen in various examples:
$ echo -e "hey\nthis\nworld" | nawk '{
if ( $1 !~ /e/ )
print $0;
else if ($1 !~ /o/ )
print $0;
else
print "condition not matched"
}'
hey
this
world
$
Thanks in advance for helping a nawk-newbie!
I simply want to have only printed lines not containing a certain pattern, here "e" or "o".
The final else I only added for testing-purpose.
You can make your life a lot easier by simply doing:
echo -e "hey\nthis\nworld" | nawk '$1 !~ /e|o/'
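If your shell's echo doesn't interpret backslash escapes, printf is a portable way to try this out; a quick check of the same filter (using awk here, which behaves like nawk for this pattern):

```shell
# Keep only lines whose first field contains neither "e" nor "o"
printf 'hey\nthis\nworld\n' | awk '$1 !~ /e|o/'
# prints: this
```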
What is going wrong in your case is:
$ echo -e "hey\nthis\nworld" | nawk '{
if ( $1 !~ /e/ ) #'this' and 'world' satisfy this condition and so are printed
print $0;
else if ($1 !~ /o/ ) #Only 'hey' falls through to this test and passes and prints
print $0;
else
print "condition not matched"
}'
hey
this
world
$
FWIW the right way to do this is with a character list inside a ternary expression:
awk '{ print ($1 ~ /[eo]/ ? "condition not matched" : $0) }'
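A quick check of the ternary approach against the OP's stated goal: lines whose first field contains neither letter pass through, and anything else gets the marker:

```shell
printf 'hey\nthis\nworld\n' |
awk '{ print ($1 ~ /[eo]/ ? "condition not matched" : $0) }'
# prints:
# condition not matched
# this
# condition not matched
```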
Going forward, if you tag your questions with awk instead of just nawk (an old, non-POSIX, and largely redundant awk variant), they'll reach a much wider audience.

Spirit: Allowing a character at the beginning but not in the middle

I'm trying to write a parser for JavaScript identifiers; so far this is what I have:
// All these rules have string as their attribute.
identifier_ = identifier_start
    >>
    *(
        identifier_part >> -(qi::char_(".") > identifier_part)
    )
    ;
identifier_part = +(qi::alnum | qi::char_("_"));
identifier_start = qi::char_("a-zA-Z$_");
This parser works fine for the list of "good identifiers" in my tests:
"x__",
"__xyz",
"_",
"$",
"foo4_.bar_3",
"$foo.bar",
"$foo",
"_foo_bar.foo",
"_foo____bar.foo"
but I'm having trouble with one of the bad identifiers: foo$bar. This is supposed to fail, but it succeeds! And the synthesized attribute has the value "foo".
Here is the debug output for foo$bar:
<identifier_>
<try>foo$bar</try>
<identifier_start>
<try>foo$bar</try>
<success>oo$bar</success>
<attributes>[[f]]</attributes>
</identifier_start>
<identifier_part>
<try>oo$bar</try>
<success>$bar</success>
<attributes>[[f, o, o]]</attributes>
</identifier_part>
<identifier_part>
<try>$bar</try>
<fail/>
</identifier_part>
<success>$bar</success>
<attributes>[[f, o, o]]</attributes>
</identifier_>
What I want is for the parser to fail when parsing foo$bar but not when parsing $foobar.
What am I missing?
You don't require the parser to consume all input.
When a rule stops matching before the $ sign, it returns with success, because nothing says the identifier can't be followed by a $ sign. So you would like to assert that it isn't followed by a character that could be part of an identifier:
identifier_ = identifier_start
    >>
    *(
        identifier_part >> -(qi::char_(".") > identifier_part)
    ) >> !identifier_start
    ;
A related directive is distinct, from the Spirit Qi repository: http://www.boost.org/doc/libs/1_55_0/libs/spirit/repository/doc/html/spirit_repository/qi_components/directives/distinct.html

Unexpected behaviours with TCL/Expect and Cisco

I'm trying to log into a Cisco switch and run a list of commands.
Using the following code, I'm able to log into the device, enable, and configure terminal:
# Connect to single host, enable, and configure
proc connect {host payload username password enablepassword} {
    send_user "Connecting to: $host $payload $username $password $enablepassword\n"
    spawn ssh -o "StrictHostKeyChecking no" -l $username $host
    # Pardon the rudeness; some switches are upper case, some are lower case
    expect "assword:"
    send "$password\r"
    # Switch to enable mode
    expect ">"
    send "en\r"
    expect "assword:"
    send "$enablepassword\r"
    expect "*#"
    send -- "conf t\r"
    expect "config*#"
}
However, using the following code, I get the output below. ($payload contains a file which has one IOS command per line)
proc drop_payload {payload} {
    set f [open "$payload"]
    set payload [split [read $f] "\n"]
    close $f
    foreach pld $payload {
        send -- "$pld\r"
        expect "config*#"
        sleep 2
    }
}
My expectation is that this loop will iterate over each line in the file; however, the Expect debug output (from exp_internal 1) is as follows:
HOST-0001#
expect: does " \r\nHOST-0001#" (spawn_id exp7) match glob pattern "*#"? yes
expect: set expect_out(0,string) " \r\nHOST-0001#"
expect: set expect_out(spawn_id) "exp7"
expect: set expect_out(buffer) " \r\nHOST-0001#"
send: sending "conf t\r" to { exp7 }
expect: does "" (spawn_id exp7) match glob pattern "config*#"? no
c
expect: does "c" (spawn_id exp7) match glob pattern "config*#"? no
o
expect: does "co" (spawn_id exp7) match glob pattern "config*#"? no
n
expect: does "con" (spawn_id exp7) match glob pattern "config*#"? no
f
expect: does "conf" (spawn_id exp7) match glob pattern "config*#"? no
expect: does "conf " (spawn_id exp7) match glob pattern "config*#"? no
t
expect: does "conf t" (spawn_id exp7) match glob pattern "config*#"? no
expect: does "conf t\r\n" (spawn_id exp7) match glob pattern "config*#"? no
Enter configuration commands, one per line. End with CNTL/Z.
HOST-0001(config)#
expect: does "conf t\r\nEnter configuration commands, one per line. End with CNTL/Z.\r\nHOST-0001(config)#" (spawn_id exp7) match glob pattern "config*#"? yes
expect: set expect_out(0,string) "configuration commands, one per line. End with CNTL/Z.\r\nHOST-0001(config)#"
expect: set expect_out(spawn_id) "exp7"
expect: set expect_out(buffer) "conf t\r\nEnter configuration commands, one per line. End with CNTL/Z.\r\nHOST-0001(config)#"
}end: sending "no logging 172.x.x.20\r" to { exp0 no logging 172.x.x.20
expect: does "" (spawn_id exp0) match glob pattern "config*#"? no
expect: timed out
}end: sending "no logging 172.x.x.210\r" to { exp0 no logging 172.x.x.210
expect: does "" (spawn_id exp0) match glob pattern "config*#"? no
expect: timed out
}end: sending "no logging 172.x.x.9\r" to { exp0 no logging 172.x.x.9
expect: does "" (spawn_id exp0) match glob pattern "config*#"? no
expect: timed out
}end: sending "no logging 172.x.x.210\r" to { exp0 no logging 172.x.x.210
expect: does "" (spawn_id exp0) match glob pattern "config*#"? no
expect: timed out
}end: sending "no logging 172.x.x.20\r" to { exp0 no logging 172.x.x.20
expect: does "" (spawn_id exp0) match glob pattern "config*#"? no
expect: timed out
}end: sending "logging 172.x.x.50\r" to { exp0 logging 172.x.x.50
expect: does "" (spawn_id exp0) match glob pattern "config*#"? no
expect: timed out
I'm confused as to why expect is matching against "conf t", which is being sent to the host, not received from it.
I'm also confused as to why none of the commands sent after conf t hit the switch; they time out instead.
You can try sending the configuration commands with the spawn_id:
spawn ssh -o "StrictHostKeyChecking no" -l $username $host
# After process creation, the process reference is saved in the
# standard Expect variable 'spawn_id'.
# Copy it to the variable 'id'.
set id $spawn_id
Now the variable 'id' holds a reference to the ssh process. We can use send and expect with this spawn id.
# Now we set the spawn id to our ssh process, to make sure
# we are sending the commands to the right process.
# You can pass the variable 'id' as an argument to 'drop_payload'.
set spawn_id $id
foreach pld $payload {
    send -- "$pld\r"
    expect "config*#"
    sleep 2
}
Or, the other way around:
foreach pld $payload {
    # This way is useful when you want to send to and expect from
    # multiple processes simultaneously.
    send -i $id "$pld\r"
    expect -i $id "config*#"
    sleep 2
}
I found that each function/procedure was outputting to a new spawn ID.
One method is to follow Dinesh's advice and explicitly define the spawn id.
My workaround was to simply stuff everything into a single output procedure.

grep a block of text delimited by two key lines

I have a text file that contains text blocks roughly formatted like this:
Beginning of block
...
...
...
.........some_pattern.......
...
...
End of block
Beginning of block
...
... etc.
The blocks can have any number of lines but are always bracketed by those two delimiter lines. What I'd like to do is match "some_pattern" and print the whole enclosing block to stdout. With the example above, I would get only this:
Beginning of block
...
...
...
.........some_pattern.......
...
...
End of block
I've tried with something like this but without success:
grep "Beginning of block\n.*some_pattern.*\n.*End of block"
Any idea how to do this with grep? (or maybe with some other tool)
I guess awk is better for this:
awk '/Beginning of block/ {p=1};
     {if (p==1) {a[NR]=$0}};
     /some_pattern/ {f=1};
     /End of block/ {p=0; if (f==1) {for (i in a) print a[i]}; f=0; delete a}' file
Explanation
It just prints when the p flag is "active" and some_pattern is matched:
When it finds Beginning of block, then makes variable p=1 and starts storing the lines in the array a[].
If it finds some_pattern, it sets the flag f to 1, so that we know the pattern has been found.
When it finds End of block it resets p=0. If some_pattern had been found since the last Beginning of block, all the lines that had been stored are printed. Finally a[] is cleared and f is reset; we will have a fresh start when we again encounter Beginning of block.
Other test
$ cat a
Beginning of block
blabla
.........some_pattern.......
and here i am
hello
End of block
Beginning of block
...
... etc.
End of block
$ awk '/Beginning of block/ {p=1}; {if(p==1){a[NR]=$0}}; /some_pattern/ {f=1}; /End of block/ {p=0; if (f==1) {for (i in a) print a[i]}; delete a;f=0}' a
Beginning of block
blabla
.........some_pattern.......
and here i am
hello
End of block
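One caveat about the answer above: the iteration order of for (i in a) is unspecified in awk, so the stored lines may come out shuffled on some implementations. A variant with guaranteed order, using the line numbers as indices (sample input inlined):

```shell
printf '%s\n' 'Beginning of block' 'blabla' '.........some_pattern.......' 'End of block' |
awk '/Beginning of block/ {p=1};
     {if (p==1) {a[NR]=$0}};
     /some_pattern/ {f=1};
     /End of block/ {p=0; if (f==1) {for (i=1; i<=NR; i++) if (i in a) print a[i]}; f=0; delete a}'
# prints the block in original line order:
# Beginning of block
# blabla
# .........some_pattern.......
# End of block
```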
The following might work for you:
sed -n '/Beginning of block/!b;:a;/End of block/!{$!{N;ba}};{/some_pattern/p}' filename
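A quick check of the sed one-liner (GNU sed; sample input invented): it accumulates each block in the pattern space and prints it only when some_pattern is present.

```shell
printf '%s\n' 'Beginning of block' 'blabla' \
  '.........some_pattern.......' 'End of block' \
  'Beginning of block' 'no match here' 'End of block' |
sed -n '/Beginning of block/!b;:a;/End of block/!{$!{N;ba}};{/some_pattern/p}'
# prints the first block only, delimiters included
```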
Not sure if I missed something, but here is a simpler variation of one of the answers above:
awk '/Beginning of block/ {p=1};
     /End of block/ {p=0; print $0};
     {if (p==1) print $0}'
You need to print the input line in the End of Block case to get both delimiters.
I wanted a slight variation that doesn't print the delimiters. In the OP's question the delimiter pattern is simple and unique, so the simplest fix is to pipe into | grep -v block. My case was more irregular, so I used the variation below. Note the next statement, which keeps the opening delimiter from being printed by the third rule:
awk '/Beginning of block/ {p=1; next};
     /End of block/ {p=0};
     {if (p==1) print $0}'
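On a small invented sample, this delimiter-free variant prints only the interior of each block (note it no longer filters on some_pattern at all):

```shell
printf '%s\n' 'Beginning of block' 'inner one' 'End of block' \
  'outside' 'Beginning of block' 'inner two' 'End of block' |
awk '/Beginning of block/ {p=1; next};
     /End of block/ {p=0};
     {if (p==1) print $0}'
# prints:
# inner one
# inner two
```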
Here's one way using awk:
awk '/Beginning of block/ { r=""; f=1 } f { r = (r ? r ORS : "") $0 } /End of block/ { if (f && r ~ /some_pattern/) print r; f=0 }' file
Results:
Beginning of block
...
...
...
.........some_pattern.......
...
...
End of block
sed -n "
/Beginning of block/,/End of block/ {
    N
    /End of block/ {
        s/some_pattern/&/p
    }
}"
sed is efficient for this kind of treatment;
with grep, you would certainly need to go through an intermediate file or array.

How to search for a not-equals value in multi-line nawk output

I have this current solution for CVS status management:
cvs -q status|awk 'c-->0;$0~s{if(b)for(c=b+1;c>1;c--)print r[(NR-c+1)%b];print;c=a}b{r[NR%b]=$0}' b=1 a=9 s='(Locally Modified)|(Needs Patch)'
This gives me a display of Locally Modified files and files that need patching, which is great.
However, a better solution for me would catch every status that is not equal to 'Up-to-date'.
I have tried s!= and s<>, but it only seems to allow =.
A little whitespace will go a long way...
The opposite of $0 ~ s is $0 !~ s, so
cvs -q status | awk '
c-- > 0
$0 !~ s {
if (b)
for (c=b+1; c>1; c--)
print r[(NR-c+1)%b]
print
c=a
}
b {r[NR%b]=$0}
' b=1 a=9 s='Up-to-date'
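The !~ negation itself is easy to sanity-check in isolation (sample lines invented; real cvs status output has more surrounding context):

```shell
printf '%s\n' 'File: a.c   Status: Up-to-date' 'File: b.c   Status: Locally Modified' |
awk '$0 !~ s' s='Up-to-date'
# prints: File: b.c   Status: Locally Modified
```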