I have GPSD running on a Linux system (specifically SkyTraq Venus 6 on a Raspberry Pi 3, but that shouldn't matter). Is there a way to trigger a command when the GPS first acquires or loses the 3D fix, almost like the scripts in /etc/network/if-up.d and /etc/network/if-down.d?
I found a solution:
Step 1: With GPSD running, gpspipe -w outputs gpsd's JSON data, documented here. The TPV class has a mode value, which can take one of these values (a sample TPV sentence is shown after the list):
0=unknown mode
1=no fix
2=2D fix
3=3D fix
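For reference, this is roughly what a TPV sentence looks like on the wire. The device path, timestamps and coordinates below are illustrative only; the exact field set varies by receiver and fix state:

$ gpspipe -w | grep -m 1 TPV
{"class":"TPV","device":"/dev/ttyAMA0","mode":3,"time":"2021-06-01T12:00:00.000Z","lat":52.5,"lon":13.4,"alt":40.0}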
Step 2: Write a little program called gpsfix.py:
#!/usr/bin/env python
import sys
import errno
import json
modes = {
    0: 'unknown',
    1: 'nofix',
    2: '2D',
    3: '3D',
}

try:
    while True:
        line = sys.stdin.readline()
        if not line:
            break  # EOF
        sentence = json.loads(line)
        if sentence['class'] == 'TPV':
            sys.stdout.write(modes[sentence['mode']] + '\n')
            sys.stdout.flush()
except IOError as e:
    if e.errno == errno.EPIPE:
        pass
    else:
        raise e
For every TPV sentence, gpspipe -w | ./gpsfix.py will print the mode.
Step 3: Use grep 3D -m 1 to wait for the first fix and then quit; once grep exits, the upstream processes in the pipe receive SIGPIPE the next time they write.
gpspipe -w | ./gpsfix.py | grep 3D -m 1 will print 3D on the first fix.
Step 4: Put it in a bash script:
#!/usr/bin/env bash
# Wait for first 3D fix
gpspipe -w | ./gpsfix.py | grep 3D -m 1
# Do something nice
cowsay "TARGET LOCATED"
And run it:
$ ./act_on_gps_fix.sh
3D
 ________________
< TARGET LOCATED >
 ----------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
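Since the question also asks about reacting when the fix is lost, here is a minimal sketch building on gpsfix.py; the handler commands are placeholders, not part of the original answer:

#!/usr/bin/env bash
# Sketch: run a command whenever the 3D fix is gained or lost.
prev=""
gpspipe -w | ./gpsfix.py | while read -r mode; do
    if [ "$mode" = "3D" ] && [ "$prev" != "3D" ]; then
        echo "3D fix acquired"      # put your fix-up command(s) here
    elif [ "$mode" != "3D" ] && [ "$prev" = "3D" ]; then
        echo "3D fix lost"          # put your fix-down command(s) here
    fi
    prev=$mode
done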
I have nmap output that looks like this:
Nmap scan report for 10.90.108.82
Host is up (0.16s latency).
PORT STATE SERVICE
80/tcp open http
|_http-title: Did not follow redirect to https://10.90.108.82/view/login.html
I would like the output to be like
10.90.108.82 http-title: Did not follow redirect to https://10.90.108.82/view/login.html
How can it be done using grep or any other means?
You can use the following nmap.sh script like this:
<nmap_command> | ./nmap.sh
nmap.sh:
#!/usr/bin/env sh
var="$(cat /dev/stdin)"
file=$(mktemp)
echo "$var" > "$file"
ip_address=$(head -1 "$file" | rev | cut -d ' ' -f1 | rev)
last_line=$(tail -1 "$file" | sed -E "s,^\|_, ,")
printf "%s%s\n" "$ip_address" "$last_line"
rm "$file"
If you do not mind using a programming language, check out this code snippet with Python:
import nmapthon as nm

scanner = nm.NmapScanner('10.90.108.82', ports=[80], arguments='-sS -sV --script http-title')
scanner.run()

if '10.90.108.82' in scanner.scanned_hosts():  # Check if host responded
    serv = scanner.service('10.90.108.82', 'tcp', 80)
    if serv is not None:  # Check if service was identified
        print(serv['http-title'])
Do not forget to execute pip3 install nmapthon.
I am the author of the library; feel free to have a look here.
It looks like you want the nmap scan output to be edited and displayed the way you want. Try bash scripting: write a bash script and run it.
Here's a link to a video where you might find an answer to your problem:
https://youtu.be/lZAoFs75_cs
Watch from the 1:27:17 timestamp, where the creator briefly describes how to trim an output and display it as you wish.
If you need it, I could write a bash script that produces a trimmed version of the output given by an nmap scan.
What is wrong with this Makefile?
I want to compile some lua files to check if there are any unexpected globals defined. I'm doing this by grepping the output of luac -l and then ignoring known globals.
So for a given lua file everything is OK if grep doesn't find anything, having ignored known lua globals.
As grep's return status code is 0 if it does find something and 1 if it doesn't, I want to force an error if the status code from the grep is 0, and allow everything to continue if it isn't.
The Makefile is like this
IGNORE_GLOBALS = "dofile\|string\|tostring\|tonumber\|math\|io\|type\|os\|table\|pairs\|next\|require"
all: $(patsubst src/common/%.lua, %.lua, $(wildcard src/common/*.lua))
%.lua:
	@echo check $@
	@luac -l src/common/$@ | grep '.ETGLOBAL' | grep -v $(IGNORE_GLOBALS) && $(error Unexpected globals in $@) || echo "No unexpected globals in $@"
But when I run it, it immediately quits on the first file (which happens to have no unexpected globals) with
Makefile:10: *** Unexpected globals in chat-cmd.lua. Stop.
line 10 is surprisingly the line before, i.e.
@echo check $@
Interestingly if I replace $(error ...) with echo ..., as in
@luac -l src/common/$@ | grep '.ETGLOBAL' | grep -v $(IGNORE_GLOBALS) && echo "Unexpected globals in $@" || echo "No unexpected globals in $@"
it behaves as intended.
As @siffiejoe says in the comment, $(error) is a make function and is run when the recipe as a whole is being evaluated (you can think of it like hoisting if that helps).
So as soon as the recipe needs to be run (and the first line executed) the $(error) call is evaluated.
Note: In the shell X && Y || Z is not a ternary operation. Z will be run if X succeeds and Y fails as well as when X fails. This doesn't matter here as echo cannot really fail but in general is worth paying attention to.
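A quick way to see the non-ternary behaviour (false stands in for a Y that fails):

$ true && false || echo "Z ran even though X succeeded"
Z ran even though X succeeded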
You want to use something more like @luac ... | grep -v $(IGNORE_GLOBALS) || { echo 'Unexpected globals in $@'; exit 1; } there. This doesn't spit out the "everything's ok" message but removes the X && Y || Z ternary issue.
If you wanted to keep that message the simplest thing to do would be to move to an actual if statement.
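For example, a recipe along these lines (a sketch only; recipe lines must be indented with a tab):

%.lua:
	@echo check $@
	@if luac -l src/common/$@ | grep '.ETGLOBAL' | grep -v $(IGNORE_GLOBALS); then \
	    echo "Unexpected globals in $@"; exit 1; \
	else \
	    echo "No unexpected globals in $@"; \
	fi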
I've got a small script called "onewhich". Its purpose is to behave like which, except that it will only give the FIRST occurrence of any executables specified as options, as found in the order they'd appear in the path.
So for example, if my path is /opt/bin:/usr/bin:/bin, and I have both /opt/bin/runme and /usr/bin/runme, then the command onewhich runme would return /opt/bin/runme.
But if I also have a /usr/bin/doit, then the command onewhich doit runme would return /usr/bin/doit instead.
The idea is to walk through the path, check for each executable specified, and if it exists, show it and exit.
Here's the script so far.
#!/bin/sh
for what in "$@"; do
    for loc in `echo "${PATH}" | awk -vRS=: 1`; do
        if [ -f "${loc}/${what}" ]; then
            echo "${loc}/${what}"
            exit 0
        fi
    done
done
exit 1
The problem is, I want to be better about PATH directories with special characters. Every second shell question here on StackOverflow talks about how bad it is to parse paths with tools like awk and sed. There's even a bash faq entry about it. (Proviso: I'm not using bash for this, but the recommendation is still valid.)
So I tried rewriting the script to separate paths in a pipe, like this:
#!/bin/sh
for what in "$@"; do
    echo "${PATH}" | awk -vRS=: 1 | while read loc ; do
        if [ -f "${loc}/${what}" ]; then
            echo "${loc}/${what}"
            exit 0
        fi
    done
done
exit 1
I'm not sure if this gives me any real advantage (since $loc is still inside quotes), but it also doesn't work because for some reason, the exit 0 seems to be ignored. Or ... it exits something (the sub-shell with the while loop that terminates the pipe, maybe), but the script exits with a value of 1 every time.
What's a better way to step through directories in ${PATH} without the risk that special characters will confuse things?
Alternately, am I reinventing the wheel? Is there maybe a way to do this that's built in to existing shell tools?
This needs to run in both Linux and FreeBSD, which is why I'm writing it in Bourne instead of bash.
Thanks.
This doesn't directly answer your question, but does eliminate the need to parse PATH at all:
onewhich () {
    for what in "$@"; do
        which "$what" 2>/dev/null && break
    done
}
This just calls which on each command on the input list until it finds a match.
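Using the example from the question (with /opt/bin/runme, /usr/bin/runme and /usr/bin/doit on the PATH), it would behave like:

$ onewhich doit runme
/usr/bin/doit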
To parse PATH, you can simply set IFS=':'.
if [ "${IFS:-x}" = "${IFS-x}" ]; then
# Only preserve the value of IFS if it is currently set
OLDIFS=$IFS
fi
IFS=":"
for f in $PATH; do # Do not quote $PATH, to allow word splitting
echo $f
done
if [ "${OLDIFS:-x}" = "${OLDIFS-x}" ]; then
IFS=$OLDIFS
fi
The above will fail if any of the directories in PATH actually contain colons.
Your first method looks to me as if it should work. In practical terms, if it's really the $PATH you'll be searching, it's unlikely you'll have spaces and newlines embedded in directories there. If you do, it's probably time to refactor.
But still, I don't think you're at risk from the possibility of bad names clobbering your loop, since you're wrapping variables in quotes. At worst, I suspect you might miss the odd valid executable, but I can't see how the script would generate errors. (I don't see how the script would miss valid executables, and I haven't tested - I'm just saying I don't see problems at first glance.)
As for your second question, about the loop, I think you've hit the nail on the head. When you run a pipe like this | that | while condition; do things; done, the while loop runs in its own shell at the end of the pipe. Exiting that shell may terminate the actions of the pipe, but that only brings you back to the parent shell, which has its own thread of execution that terminates with exit 1.
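You can see the subshell behaviour with a tiny test (illustrative):

$ echo hi | while read line; do exit 3; done; echo "still here, pipeline status: $?"
still here, pipeline status: 3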
As for a better way to do this, I would consider which.
#!/bin/sh
for what in "$@"; do
    which "$what"
done | head -1
And if you really want the exit values as well:
#!/bin/sh
for what in "$@"; do
    which "$what" && exit 0
done
exit 1
The second might even use fewer resources, as it doesn't have to open a file handle and pipe through head.
You can also split your path using IFS. For example, if you wanted to wrap your loops the other way around, you could do this:
#!/bin/sh
IFS=":"
for loc in $PATH; do
    for what in "$@"; do
        if [ -x "$loc"/"$what" ]; then
            echo "$loc"/"$what"
            exit 0
        fi
    done
done
exit 1
Note that under normal circumstances, you might want to save the old value of $IFS, but you seem to be doing things in a stand-alone script, so the "new" value gets thrown out when the script exits.
All the above code is untested. YMMV.
Another way to get around the need to parse PATH at all is to run the builtin type command in a new shell with a stripped environment (i.e. there simply are no functions or aliases to look up; cf. env -i sh -c 'type cmd 2>/dev/null').
# using `cmd` instead of $(cmd) for portability

onewhich() {
    ec=0  # exit code
    for cmd in "$@"; do
        command -p env -i PATH="$PATH" sh -c '
            export LC_ALL=C LANG=C
            cmd="$1"
            path="`type "$cmd" 2>/dev/null`"
            if [ X"$path" = "X" ]; then
                printf "%s\n" "error: command \"${cmd}\" not found in PATH" 1>&2
                exit 1
            else
                case "$path" in
                    *\ /*)
                        path="/${path#*/}"
                        printf "%s\n" "$path";;
                    *)
                        printf "%s\n" "error: no disk file: $path" 1>&2
                        exit 1;;
                esac
                exit 0
            fi
        ' _ "$cmd"
        [ $? != 0 ] && ec=1
    done
    return $ec  # nonzero if any lookup failed
}
onewhich awk ls sed
onewhich builtin
onewhich if
Since which on success returns two full command paths if two commands are specified as arguments, exit 0 in the first onewhich script above aborts the program prematurely. In addition, if two commands are specified as arguments to which, the exit code of which is set to 1 even if only one command lookup failed (cf. which awk sedxyz ls; echo $?). To mimic this behaviour of the which command it is necessary to toggle on/off two variables (cnt and nomatches below).
onewhich() (
    IFS=":"
    nomatches=0
    for cmd in "$@"; do
        cnt=0
        for loc in $PATH ; do
            if [ $cnt = 0 ] && [ -x "$loc"/"$cmd" ]; then
                echo "$loc"/"$cmd"
                cnt=1
            fi
        done
        [ $cnt = 0 ] && nomatches=1
    done
    [ $nomatches = 1 ] && exit 1 || exit 0  # exit 1: at least one cmd was not in PATH
)
onewhich awk ls sed
onewhich awk lsxyz sed
onewhich builtin
onewhich if
I am trying to use a shell script (well a "one liner") to find any common lines between around 50 files.
Edit: Note I am looking for a line (lines) that appears in all the files
So far I've tried grep: grep -v -x -f file1.sp *, which just matches that file's contents across ALL the other files.
I've also tried grep -v -x -f file1.sp file2.sp | grep -v -x -f - file3.sp | grep -v -x -f - file4.sp | grep -v -x -f - file5.sp etc., but I believe that searches using the files to be searched as stdin, not as the pattern to match on.
Does anyone know how to do this with grep or another tool?
I don't mind if it takes a while to run, I've got to add a few lines of code to around 500 files and wanted to find a common line in each of them for it to insert 'after' (they were originally just c&p from one file so hopefully there are some common lines!)
Thanks for your time,
When I first read this I thought you were trying to find 'any common lines'. I took this as meaning "find duplicate lines". If this is the case, the following should suffice:
sort *.sp | uniq -d
Upon re-reading your question, it seems that you are actually trying to find lines that 'appear in all the files'. If this is the case, you will need to know the number of files in your directory:
find . -type f -name "*.sp" | wc -l
If this returns the number 50, you can then use awk like this:
WHINY_USERS=1 awk '{ array[$0]++ } END { for (i in array) if (array[i] == 50) print i }' *.sp
You can consolidate this process and write a one-liner like this:
WHINY_USERS=1 awk -v find=$(find . -type f -name "*.sp" | wc -l) '{ array[$0]++ } END { for (i in array) if (array[i] == find) print i }' *.sp
old, bash answer (O(n); opens 2 * n files)
From @mjgpy3's answer, you just have to make a for loop and use comm, like this:
#!/bin/bash
tmp1="/tmp/tmp1$RANDOM"
tmp2="/tmp/tmp2$RANDOM"
cp "$1" "$tmp1"
shift
for file in "$#"
do
comm -1 -2 "$tmp1" "$file" > "$tmp2"
mv "$tmp2" "$tmp1"
done
cat "$tmp1"
rm "$tmp1"
Save in a comm.sh, make it executable, and call
./comm.sh *.sp
assuming all your filenames end with .sp.
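One caveat: comm expects its inputs to be sorted, so the above works as-is only if the .sp files are already sorted. If they are not, a variant along these lines (a sketch, not part of the original answer) sorts as it goes:

#!/bin/bash
tmp1="$(mktemp)"
tmp2="$(mktemp)"
sort "$1" > "$tmp1"
shift
for file in "$@"
do
    sort "$file" | comm -1 -2 "$tmp1" - > "$tmp2"
    mv "$tmp2" "$tmp1"
done
cat "$tmp1"
rm -f "$tmp1"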
Updated answer, python, opens each file only once
Looking at the other answers, I wanted to give one that opens each file only once, without using any temporary file, and supports duplicated lines. Additionally, let's process the files in parallel.
Here you go (in python3):
#!/bin/env python
import argparse
import sys
import multiprocessing
import os

EOLS = {'native': os.linesep.encode('ascii'), 'unix': b'\n', 'windows': b'\r\n'}

def extract_set(filename):
    with open(filename, 'rb') as f:
        return set(line.rstrip(b'\r\n') for line in f)

def find_common_lines(filenames):
    pool = multiprocessing.Pool()
    line_sets = pool.map(extract_set, filenames)
    return set.intersection(*line_sets)

if __name__ == '__main__':
    # usage info and argument parsing
    parser = argparse.ArgumentParser()
    parser.add_argument("in_files", nargs='+',
                        help="find common lines in these files")
    parser.add_argument('--out', type=argparse.FileType('wb'),
                        help="the output file (default stdout)")
    parser.add_argument('--eol-style', choices=EOLS.keys(), default='native',
                        help="(default: native)")
    args = parser.parse_args()

    # actual stuff
    common_lines = find_common_lines(args.in_files)

    # write results to output
    to_print = EOLS[args.eol_style].join(common_lines)
    if args.out is None:
        # find out stdout's encoding, utf-8 if absent
        encoding = sys.stdout.encoding or 'utf-8'
        sys.stdout.write(to_print.decode(encoding))
    else:
        args.out.write(to_print)
Save it into a find_common_lines.py, and call
python ./find_common_lines.py *.sp
More usage info with the --help option.
Combining these two answers (ans1 and ans2), I think you can get the result you need without sorting the files:
#!/bin/bash
ans="matching_lines"
for file1 in *
do
    for file2 in *
    do
        if [ "$file1" != "$ans" ] && [ "$file2" != "$ans" ] && [ "$file1" != "$file2" ] ; then
            echo "Comparing: $file1 $file2 ..." >> $ans
            perl -ne 'print if ($seen{$_} .= @ARGV) =~ /10$/' $file1 $file2 >> $ans
        fi
    done
done
Simply save it, give it execution rights (chmod +x compareFiles.sh) and run it. It will take all the files present in the current working directory and will make an all-vs-all comparison leaving in the "matching_lines" file the result.
Things to be improved (the first two are sketched after this list):
Skip directories
Avoid comparing all the files two times (file1 vs file2 and file2 vs file1).
Maybe add the line number next to the matching string
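A sketch of the first two improvements (untested; it keeps the same perl one-liner as above):

#!/bin/bash
ans="matching_lines"
for file1 in *
do
    [ -f "$file1" ] || continue                  # skip directories
    for file2 in *
    do
        [ -f "$file2" ] || continue              # skip directories
        # compare each unordered pair only once, and never the output file itself
        if [[ "$file1" < "$file2" ]] && [ "$file1" != "$ans" ] && [ "$file2" != "$ans" ]; then
            echo "Comparing: $file1 $file2 ..." >> "$ans"
            perl -ne 'print if ($seen{$_} .= @ARGV) =~ /10$/' "$file1" "$file2" >> "$ans"
        fi
    done
done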
Hope this helps.
See this answer. I originally thought a diff sounded like what you were asking for, but this answer seems much more appropriate.
I'm piping the output of a command to egrep, which I'm using to make sure a particular failure string doesn't appear in it.
The command itself, unfortunately, won't return a proper non-zero exit status on failure, that's why I'm doing this.
command | egrep -i -v "badpattern"
This works as far as giving me the exit code I want (1 if badpattern appears in the output, 0 otherwise), BUT, it'll only output lines that don't match the pattern (as the -v switch was designed to do). For my needs, those lines are the most interesting lines.
Is there a way to have grep just blindly pass through all lines it gets as input, and just give me the exit code as appropriate?
If not, I was thinking I could just use perl -ne "print; exit 1 if /badpattern/". I use -n rather than -p because -p won't print the offending line (since it prints after running the one-liner). So, I use -n and call print myself, which at least gives me the first offending line, but then output (and execution) stops there, so I'd have to do something like
perl -e '$code = 0; while (<>) { print; $code = 1 if /badpattern/; } exit $code'
which does the whole deal, but is a bit much. Is there a simple command-line switch for grep that will just do what I'm looking for?
Actually, your perl idea is not bad. Try:
perl -pe 'END { exit $status } $status=1 if /badpattern/;'
I bet this is at least as fast as the other options being suggested.
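Used in a pipeline it looks like this (command and badpattern are the placeholders from the question):

$ command | perl -pe 'END { exit $status } $status=1 if /badpattern/;'
$ echo $?    # 1 if badpattern appeared anywhere in the output, 0 otherwise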
Another option is to copy the stream to the terminal with tee, while grep -q supplies the exit status:
$ tee /dev/tty < ~/.bashrc | grep -q spam && echo spam || echo no spam
How about redirecting to /dev/null? That discards all the lines, but you still get the exit code:
$ grep spam .bashrc > /dev/null
$ echo $?
1
$ grep alias .bashrc > /dev/null
$ echo $?
0
Or you can simply use the -q switch
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit
immediately with zero status if any match is found, even if an
error was detected. Also see the -s or --no-messages option.
(-q is specified by POSIX.)