Let's say I'm using a Nix program nixProgram which is located in some Nix store location like
/nix/store/5zlrw36bpq6z1m3zjawwrwaryhmqjwbr-nixProgram-75.0/bin/nixProgram
Here I'm assuming nixProgram-75.0 is the name attribute of the derivation.
Question: Is there some well-known way to query the name attribute of nixProgram from the Linux shell? Something like
$ nix-get-name nixProgram
nixProgram-75.0
?
You can use nix-store -qd to get the derivation that generated the store path. That derivation might have already been garbage collected, in which case you're out of luck. Otherwise you can use nix show-derivation to serialize the derivation to JSON and jq to extract the name field.
#!/bin/sh
set -e

die() {
    >&2 echo "$@"
    exit 1
}

# Get the store path of $1 (i.e. /nix/store/<hash>-<name>/)
STORE_PATH=${1%*${1#/nix/store/*/}}
# Query the derivation which generated the store path
DRV=$(nix-store -qd "$STORE_PATH")
# Check if the derivation is still there
[ -f "$DRV" ] || die "You're out of luck, the derivation is no longer there"
# Serialize it to JSON and use jq to get the name
nix show-derivation "$DRV" | jq -r '.[] | .env.name'
On my system:
$ ./nix-get-name /nix/store/4xb9z8vvk3fk2ciwqh53hzp72d0hx1da-bash-interactive-4.4-p23/bin/bash
bash-interactive-4.4-p23
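If the derivation has already been garbage collected, a cruder fallback is to parse the name out of the store path itself. Note this only recovers the <name> part of /nix/store/<hash>-<name>, which for ordinary derivations equals the name attribute, but it is string surgery rather than a real query:
$ p=/nix/store/4xb9z8vvk3fk2ciwqh53hzp72d0hx1da-bash-interactive-4.4-p23/bin/bash
$ echo "${p#/nix/store/}" | cut -d/ -f1 | cut -d- -f2-
bash-interactive-4.4-p23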
Related
Like in this tutorial, I want to see the derivation.
ls /nix/store/*.drv | head -n 1 | nix show-derivation
experimental Nix feature 'nix-command' is disabled; use '--extra-experimental-features nix-command' to override
I retried with the proposed change:
ls /nix/store/*.drv | head -n 1 | nix show-derivation --extra-experimental-features nix-command
error: unable to find a flake before encountering filesystem boundary at '/mnt'
nix show-derivation expects its input as command line arguments.
Lacking any inputs, it seems to default to looking up a flake.
This would work if it weren't for *.drv expanding to too many results in my store:
$ nix show-derivation --extra-experimental-features nix-command $(ls /nix/store/*.drv | head -n 1)
bash: /run/current-system/sw/bin/ls: Argument list too long
If you have any .drv files, this works:
$ nix show-derivation --extra-experimental-features nix-command $(find /nix/store -maxdepth 1 -name '*.drv' | head -n 1)
Without any .drv files, $(...) produces no arguments, and that would be another way to get your error message.
For a permanent effect, add the following line to your configuration file ~/.config/nix/nix.conf:
experimental-features = nix-command
Create the conf file if it doesn't exist. You can append other experimental features, such as flakes.
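For example, to enable both of the commonly paired features (flakes is only needed if you actually use flakes), the file would contain:
experimental-features = nix-command flakes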
I was searching for a change that included "foreach" so I used this Mercurial command:
$ hg grep -r "user(mjh) & public() & date(-30)" --diff -i foreach
and it does return the hits where "foreach" was added and removed.
However, I'd like to know the actual commit hashes too. If I add a template:
$ hg grep ... -T '{date|shortdate}\n{node|short}\n{desc|firstline}\n\n'
then I get the commit hash and description as expected, but then I don't see the changed files listed.
Is there a template to capture the output of hg grep? The {files} template lists the files associated with a commit, but that's not the actual grep output. Is there an iterable template keyword available for the grep results?
Please re-read hg help grep -v carefully (the -v option is important), and note the following part (it was new and unexpected for me too):
The following keywords are supported in addition to the common template keywords and functions. See also 'hg help templates'.
change   String. Character denoting insertion "+" or removal "-".
         Available if "--diff" is specified.
lineno   Integer. Line number of the match.
path     String. Repository-absolute path of the file.
texts    List of text chunks.
After that you'll be able to reproduce the default output of hg grep in your template (more or less, since some details will differ slightly):
>hg grep --diff -i -r 1166 to_try
>hg grep --diff -i -r 1166 -T "{path}:{rev}:{change}:{texts}\n" to_try
hggit/compat.py:1166:-: for args in parameters_to_try:
hggit/compat.py:1166:+: for (args, kwargs) in parameters_to_try:
and after replacing {rev} with {node|short}:
>hg grep --diff -i -r 1166 -T "{path}:{node|short}:{change}:{texts}\n" to_try
hggit/compat.py:f6cef55e6aeb:-: for args in parameters_to_try:
hggit/compat.py:f6cef55e6aeb:+: for (args, kwargs) in parameters_to_try:
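Applied to the revset from the question, the same template should print the hash next to each hit; this is an untested sketch that just combines the original command with the keywords above:
>hg grep -r "user(mjh) & public() & date(-30)" --diff -i -T "{node|short}:{path}:{change}:{texts}\n" foreach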
OK, this might be a silly question. I've got the test.json file:
{
    "timestamp": 1234567890,
    "report": "AgeReport"
}
What I want to do is to extract timestamp and report values and store them in some env variables:
export $(cat test.json | jq -r '@sh "TIMESTAMP=\(.timestamp) REPORT=\(.report)"')
and the result is:
echo $TIMESTAMP $REPORT
1234567890 'AgeReport'
The problem is that those single quotes break other commands.
How can I get rid of those single quotes?
NOTE: I'm gonna leave the accepted answer as is, but see @Inian's answer for a better solution.
Why make it convoluted by using eval and dealing with a quoting mess? Instead, simply emit the variables joined with NUL (\u0000) and read them back in the shell:
{
    IFS= read -r -d '' TIMESTAMP
    IFS= read -r -d '' REPORT
} < <(jq -r '(.timestamp|tostring) + "\u0000" + .report + "\u0000"' test.json)
This makes the parsing more robust: the fields are joined by a NUL delimiter, which can never occur inside the strings themselves.
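Note that read creates plain shell variables. If, as in the question, the values should be visible to other commands, export them afterwards:
export TIMESTAMP REPORT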
From the jq man-page, the @sh format string converts its input to be
escaped suitable for use in a command-line for a POSIX shell.
So, rather than attempting to splice the output of jq into the shell's export command, which would require carefully removing some of the quoting, you can generate the entire command line inside jq and then execute it with eval:
eval "$(
cat test.json |\
jq -r '#sh "export TIMESTAMP=\(.timestamp) REPORT=\(.report)"'
)"
I sometimes want to grep for a function to see examples of how it is used in context, e.g. what sort of parameters it is called with. When I'm doing this, the name of the file the match appears in becomes useless clutter. Is there any way to instruct grep not to include it? (Or a grep alternative that solves the same problem?)
You can tell grep not to indicate the filename in the output with the option -h:
-h, --no-filename
    Suppress the prefixing of file names on output. This is the
    default when there is only one file (or only standard input) to
    search.
Test
$ echo "hello" > f1
$ echo "hello man" > f2
$ grep "hello" f*
f1:hello
f2:hello man
$ grep -h "hello" f*
hello
hello man
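The same option combines with -r for recursive searches, which is the usual case when grepping a whole source tree for a function (assuming GNU or BSD grep, both of which support -r; functionName and src/ are placeholders):
$ grep -rh "functionName(" src/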
I am trying to use a shell script (well, a "one-liner") to find any common lines between around 50 files.
Edit: Note I am looking for a line (lines) that appears in all the files
So far I've tried grep -v -x -f file1.sp *, which just matches that file's contents across ALL the other files.
I've also tried grep -v -x -f file1.sp file2.sp | grep -v -x -f - file3.sp | grep -v -x -f - file4.sp | grep -v -x -f - file5.sp etc., but I believe that searches using the files to be searched as stdin, not as the patterns to match on.
Does anyone know how to do this with grep or another tool?
I don't mind if it takes a while to run. I've got to add a few lines of code to around 500 files and wanted to find a common line in each of them for it to insert 'after' (they were originally just c&p from one file, so hopefully there are some common lines!)
Thanks for your time,
When I first read this I thought you were trying to find 'any common lines'. I took this as meaning "find duplicate lines". If this is the case, the following should suffice:
sort *.sp | uniq -d
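For instance, with two toy files (contents made up for the demonstration):
$ printf 'a\nb\n' > f1.sp
$ printf 'b\nc\n' > f2.sp
$ sort *.sp | uniq -d
b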
Upon re-reading your question, it seems that you are actually trying to find lines that 'appear in all the files'. If this is the case, you will need to know the number of files in your directory:
find . -type f -name "*.sp" | wc -l
If this returns the number 50, you can then use awk like this:
WHINY_USERS=1 awk '{ array[$0]++ } END { for (i in array) if (array[i] == 50) print i }' *.sp
You can consolidate this process and write a one-liner like this:
WHINY_USERS=1 awk -v find=$(find . -type f -name "*.sp" | wc -l) '{ array[$0]++ } END { for (i in array) if (array[i] == find) print i }' *.sp
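As an aside, WHINY_USERS=1 is an old, undocumented gawk environment variable whose only effect here is sorting the output of for (i in array); the counting works in any awk. A variant (a sketch, not from the original answer) avoids the separate find | wc -l by using ARGC - 1, the number of file arguments, and counts each line at most once per file so in-file duplicates don't inflate the tally:
awk '
    FNR == 1 { split("", seen) }       # new input file: reset per-file dedup set
    !seen[$0]++ { count[$0]++ }        # count each distinct line once per file
    END { for (line in count) if (count[line] == ARGC - 1) print line }
' *.sp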
old, bash answer (O(n); opens 2 * n files)
From @mjgpy3's answer, you just have to make a for loop and use comm, like this:
#!/bin/bash
tmp1="/tmp/tmp1$RANDOM"
tmp2="/tmp/tmp2$RANDOM"
# comm requires sorted input, so sort the first file into the accumulator
sort "$1" > "$tmp1"
shift
for file in "$@"
do
    comm -1 -2 "$tmp1" <(sort "$file") > "$tmp2"
    mv "$tmp2" "$tmp1"
done
cat "$tmp1"
rm "$tmp1"
Save it as comm.sh, make it executable, and call
./comm.sh *.sp
assuming all your filenames end with .sp.
Updated answer, python (opens each file only once)
Looking at the other answers, I wanted to give one that opens each file only once, uses no temporary files, and supports duplicated lines. Additionally, it processes the files in parallel.
Here you go (in Python 3):
#!/usr/bin/env python3
import argparse
import sys
import multiprocessing
import os

EOLS = {'native': os.linesep.encode('ascii'), 'unix': b'\n', 'windows': b'\r\n'}

def extract_set(filename):
    with open(filename, 'rb') as f:
        return set(line.rstrip(b'\r\n') for line in f)

def find_common_lines(filenames):
    pool = multiprocessing.Pool()
    line_sets = pool.map(extract_set, filenames)
    return set.intersection(*line_sets)

if __name__ == '__main__':
    # usage info and argument parsing
    parser = argparse.ArgumentParser()
    parser.add_argument("in_files", nargs='+',
                        help="find common lines in these files")
    parser.add_argument('--out', type=argparse.FileType('wb'),
                        help="the output file (default stdout)")
    parser.add_argument('--eol-style', choices=EOLS.keys(), default='native',
                        help="(default: native)")
    args = parser.parse_args()

    # actual stuff
    common_lines = find_common_lines(args.in_files)

    # write results to output
    to_print = EOLS[args.eol_style].join(common_lines)
    if args.out is None:
        # find out stdout's encoding, utf-8 if absent
        encoding = sys.stdout.encoding or 'utf-8'
        sys.stdout.write(to_print.decode(encoding))
    else:
        args.out.write(to_print)
Save it as find_common_lines.py, and call
python3 ./find_common_lines.py *.sp
More usage info with the --help option.
Combining these two answers (ans1 and ans2), I think you can get the result you need without sorting the files:
#!/bin/bash
ans="matching_lines"
for file1 in *
do
    for file2 in *
    do
        if [ "$file1" != "$ans" ] && [ "$file2" != "$ans" ] && [ "$file1" != "$file2" ] ; then
            echo "Comparing: $file1 $file2 ..." >> "$ans"
            # print each line the first time it is seen in both file1 and file2
            perl -ne 'print if ($seen{$_} .= @ARGV) =~ /10$/' "$file1" "$file2" >> "$ans"
        fi
    done
done
Simply save it, give it execution rights (chmod +x compareFiles.sh) and run it. It will take all the files present in the current working directory and make an all-vs-all comparison, leaving the result in the "matching_lines" file.
Things to be improved:
Skip directories
Avoid comparing each pair of files twice (file1 vs file2 and file2 vs file1).
Maybe add the line number next to the matching string
Hope this helps.
Best,
Alan Karpovsky
See this answer. I originally thought a diff sounded like what you were asking for, but this answer seems much more appropriate.