I need to export data from a MongoDB database where I have sub-documents and fields with spaces in their names. I have tried several permutations, such as:
mongoexport --db main --collection prices --fields _id,subdoc1.sum,subdoc1["field name 1"],subdoc1["field name 2"] --csv > out.dat
or
mongoexport --db main --collection prices --fields _id,subdoc1.sum,subdoc1."field name 1",subdoc1."field name 2" --csv > out.dat
There is no documentation on how to do this. Is this possible?
On Unix-like operating systems you can escape the spaces on the command line with a backslash instead of quotation marks:
mongoexport --db main --collection prices --fields _id,subdoc1.sum,subdoc1.field\ name\ 1,subdoc1.field\ name\ 2 --csv >out.dat
And on Windows (I think) you use the caret:
mongoexport --db main --collection prices --fields _id,subdoc1.sum,subdoc1.field^ name^ 1,subdoc1.field^ name^ 2 --csv >out.dat
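Alternatively (untested), you should be able to quote the entire --fields value so the shell passes the spaces through unchanged; mongoexport itself splits the field list only on commas:
mongoexport --db main --collection prices --fields '_id,subdoc1.sum,subdoc1.field name 1,subdoc1.field name 2' --csv > out.dat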
How can I delete a tag in InfluxDB Cloud or the InfluxDB Data Explorer? For example, a tag like scripts. Thanks.
influx delete --bucket default \
  --start '1970-01-01T00:00:00Z' \
  --stop "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" \
  --predicate "host=\"some-hostname\""
This will delete the data points where the host tag has the value "some-hostname".
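If you only need to touch a single measurement, the predicate can be narrowed with AND (the measurement name cpu here is hypothetical):
influx delete --bucket default \
  --start '1970-01-01T00:00:00Z' \
  --stop "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" \
  --predicate "_measurement=\"cpu\" AND host=\"some-hostname\""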
I've not found a way to completely remove a tag key itself, though.
I have a fwcapture.cap file, which is used by Wireshark.
It contains many IP addresses, both source IPs and destination IPs.
How can I extract the unique IP addresses (no matter whether source or destination) as a list?
You can use tshark, which is already included in a Wireshark installation.
tshark -T json -e 'ip.src' -e 'ip.dst' -r filename.pcap | grep '\.[0-9]' | sort -u
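A variant (an untested sketch) that avoids the JSON wrapping is -T fields, which prints ip.src and ip.dst tab-separated; splitting on the tab and dropping empty lines then yields the unique addresses directly:
tshark -T fields -e ip.src -e ip.dst -r filename.pcap | tr '\t' '\n' | grep -v '^$' | sort -u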
I've been using GNU parallel and I want to keep output order (--keep-order), grouped by jobs (--group), but also with stdout and stderr interleaved in the order they were written. Right now, the grouping options print all of a job's stdout first and only afterwards its stderr.
As an example, is there any way to make these two commands give the same output?
seq 4 | parallel -j0 'sleep {}; echo -n start{}>&2; sleep {}; echo {}end'
seq 4 | parallel -j0 'sleep {}; echo -n start{} ; sleep {}; echo {}end'
Thanks.
As per the comment to the other answer, to keep the output ordered, simply have parallel's bash invocation redirect stderr to stdout:
parallel myfunc '2>&1'
E.g.,
parallel -j8 eval \{1} -w1 \{2} '2>&1' ::: "traceroute -a -f9" traceroute6 ::: ordns.he.net one.one.one.one google-public-dns-a.google.com
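This can also be applied to the question's own example (a sketch): an exec 2>&1 at the start of the job body merges both streams for everything that follows, so the start/end lines come out in the order they were produced:
seq 4 | parallel -j0 'exec 2>&1; sleep {}; echo -n start{} >&2; sleep {}; echo {}end'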
You cannot do that if you still want stderr and stdout to be separated.
The reason for this is that stderr and stdout are buffered in two different temporary files, so their relative order is lost.
But maybe you can explain a bit more about what you need this for. In that case there might be a solution.
Assuming that you don't have to use GNU parallel, and that the main requirements are parallel execution with the ordered output of both stderr and stdout maintained, we can build a solution that allows the following example usage (and also provides the return code). The results of the executions end up in a list, where each element is in turn a list of three strings, indexed as 0=stdout, 1=stderr and 2=return code.
source mapfork.sh
ArgsMap=("-Pn" "-p" "{}" "{}")
Args=("80" "google.com" "25" "tutanota.com" "80" "apa bepa")
declare -a Results=$(mapfork nmap "(${ArgsMap[*]@Q})" "(${Args[*]@Q})")
So, in order to print, for example, the stderr result of the third destination ("apa bepa"), you can do:
declare -a res3="${Results[2]}"
declare -p res3
# declare -a res3=([0]=$'Starting Nmap 7.70 ( https://nmap.org ) at 2019-06-21 18:55 CEST\nNmap done: 0 IP addresses (0 hosts up) scanned in 0.09 seconds' [1]=$'Failed to resolve "apa bepa".\nWARNING: No targets were specified, so 0 hosts scanned.' [2]="0")
printf '%b\n' "${res3[1]}"
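To walk over every destination's result rather than a single one, the same declare pattern can be applied in a loop (a sketch built only on the structure shown above):
for r in "${Results[@]}"; do
  declare -a res="$r"
  printf 'stdout: %s\nstderr: %s\nreturn code: %s\n\n' "${res[0]}" "${res[1]}" "${res[2]}"
done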
mapfork.sh is shown below. It is a bit complicated, but its parts have been explained in other answers, so I won't repeat the details here:
Capture both stdout and stderr in Bash [duplicate]
How can I make an array of lists (or similar) in bash?
#!/bin/bash
# reference: https://stackoverflow.com/questions/13806626/capture-both-stdout-and-stderr-in-bash

# Run one command, capture its stdout, stderr and return code separately,
# and write the serialized result, prefixed with its index, to the fifo.
nullWrap(){
  local -i i; i="$1"
  local myCommand="$2"
  local -a myCommandArgs="$3"
  local myfifo="$4"
  local stderr
  local stdout
  local stdret
  # capture stdout, stderr and the return code into three variables
  # (see the reference above for how this construct works)
  . <(\
    { stderr=$({ stdout=$(eval "$myCommand ${myCommandArgs[*]@Q}"); stdret=$?; } 2>&1 ;\
      declare -p stdout >&2 ;\
      declare -p stdret >&2) ;\
      declare -p stderr;\
    } 2>&1)
  local -a Arr=("$stdout" "$stderr" "$stdret")
  # emit one NUL-terminated record: "<index>:<quoted array>"
  printf "${i}:%s\u0000" "(${Arr[*]@Q})" > "$myfifo"
}

mapfork(){
  local command
  command="$1"
  local -a CommandArgs="$2"
  local -a Args="$3"
  local -a PipedArr
  local -i i
  # reserve a unique name with mktemp, then reuse it for a fifo
  local myfifo=$(mktemp /tmp/temp.XXXXXXXX)
  rm "$myfifo"
  mkfifo "$myfifo"
  # remember which positions in CommandArgs are {} placeholders
  local -a placeHolders=()
  for ((i=0;i<${#CommandArgs[@]};i++)); do
    [[ "${CommandArgs[$i]}" =~ ^\{\}$ ]] && placeHolders+=("$i") ;done
  # fork one nullWrap job per group of arguments
  for ((i=0;i<${#Args[@]};i+=0)); do
    # if we have placeholders in CommandArgs we need to take args
    # from Args to replace.
    if [[ ${#placeHolders[@]} -gt 0 ]]; then
      for ii in "${placeHolders[@]}"; do
        CommandArgs["$ii"]="${Args[$i]}"
        i+=1; done; fi
    nullWrap "$i" "$command" "(${CommandArgs[*]@Q})" "$myfifo" &
  done
  # collect one NUL-terminated record per job; the index prefix restores order
  for ((i=0;i<${#Args[@]};i+=${#placeHolders[@]})); do
    local res
    res=$(read -d $'\u0000' -r temp <"$myfifo" && printf '%b' "$temp")
    local -i resI
    resI="${res%%:*}"
    PipedArr[$resI]="${res#*:}"
  done
  # reference: https://stackoverflow.com/questions/41966140/how-can-i-make-an-array-of-lists-or-similar-in-bash
  printf '%s' "(${PipedArr[*]@Q})"
}
I tried recursively parsing email addresses from a directory of text/HTML files with xargs and grep, but this command keeps including the file path (I just want the email addresses in my resulting emails.csv file).
find . -type f | xargs grep -E -o "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,6}\b" >> ~/emails.csv
Can you explain what's wrong with my grep command? I don't need the output sorted or unique; I want to match all occurrences of email addresses in the files. I need to use xargs because I'm parsing 20 GB worth of text files.
Thanks.
When you tell grep to search in more than one file, it prepends the matching filename to each search result. Try the following to see the effect...
First, search in a single file:
grep local /etc/hosts
# localhost is used to configure the loopback interface
127.0.0.1 localhost
Now search in two files:
grep local /etc/hosts /dev/null
/etc/hosts:# localhost is used to configure the loopback interface
/etc/hosts:127.0.0.1 localhost
To suppress the filename in which the match was found, add the -h switch to grep, like this:
grep -h <something> <somewhere>
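Applied to the original command, that looks like this; using find -print0 with xargs -0 as well guards against filenames containing spaces:
find . -type f -print0 | xargs -0 grep -h -E -o "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,6}\b" >> ~/emails.csv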
I want to parse the standard header output of tshark. Since parsing the default output doesn't work, I am using custom fields that produce almost the same thing. What I am missing is the resolution of the protocol name. My command is:
sudo tshark -b 256 -P -T fields -e frame.time_epoch -e ip.src -e ip.dst -e ip.proto -e ip.len -e col.Info -E separator=';' -b filesize:65535 -b files:10 -w tshark_tmp
This almost works, what I get is (this example is capturing two pings):
1378869929.862628000;192.168.78.252;192.168.78.53;1;84;Echo (ping) request id=0x0abe, seq=65/16640, ttl=64
1378869929.863188000;192.168.78.53;192.168.78.252;1;84;Echo (ping) reply id=0x0abe, seq=65/16640, ttl=64 (request in 1)
The same two pings look like this in the normal tshark output, without custom fields:
0.000000 192.168.78.252 -> 192.168.78.53 ICMP 98 Echo (ping) request id=0x0abe, seq=13/3328, ttl=64
0.000707 192.168.78.53 -> 192.168.78.252 ICMP 98 Echo (ping) reply id=0x0abe, seq=13/3328, ttl=64 (request in 1)
The main difference that I need to solve is that mine prints the raw numbers (protocol 1, length 84), whereas tshark prints ICMP 98. I could implement my own lookup table, but there is a large number of protocols and tshark already knows how to decode them; I just need to figure out how to get that into my parsing.
As of the 1.11.x and 1.12 versions of tshark, the field names are _ws.col.Protocol and _ws.col.Info, instead of col.Protocol and col.Info.
Example:
tshark -T fields -e _ws.col.Protocol -e _ws.col.Info
Source: col.Protocol missing from tshark 1.11.3 and 1.12.0-rc2
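Applied to the question's capture command, only the two column fields change (ip.proto becomes _ws.col.Protocol, and col.Info becomes _ws.col.Info); untested, but it follows directly from the field rename:
sudo tshark -b 256 -P -T fields -e frame.time_epoch -e ip.src -e ip.dst -e _ws.col.Protocol -e ip.len -e _ws.col.Info -E separator=';' -b filesize:65535 -b files:10 -w tshark_tmp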
Found the answer:
-e col.Protocol
As always happens, you work on a problem for days, post the question, and then find the answer.