Piping shasum to grep, but grep returning all lines of piped input, even ones that don't match - grep

I'm trying to script the download of the Node.js source and the corresponding SHASUMS256.txt, checksum it, grep for OK, and return no output, just exit code 0 on success, using grep's -q flag:
wget http://nodejs.org/dist/latest/node-v0.10.26.tar.gz
wget http://nodejs.org/dist/latest/SHASUMS256.txt
sha256sum -c SHASUMS256.txt|grep -q OK
However, grep is passing through a selection of the non-matching "No such file or directory" error lines (though not all of them, confusingly):
> sha256sum -c SHASUMS256.txt|grep -q OK
sha256sum: node-v0.10.26-darwin-x64.tar.gz: No such file or directory
sha256sum: node-v0.10.26-darwin-x86.tar.gz: No such file or directory
sha256sum: node-v0.10.26-linux-x64.tar.gz: No such file or directory
sha256sum: node-v0.10.26-linux-x86.tar.gz: No such file or directory
sha256sum: node-v0.10.26-sunos-x64.tar.gz: No such file or directory
sha256sum: node-v0.10.26-sunos-x86.tar.gz: No such file or directory
sha256sum: node-v0.10.26-x86.msi: No such file or directory
sha256sum: node-v0.10.26.pkg: No such file or directory
sha256sum: node.exe: No such file or directory
Any idea what the problem is here? All I want from this script is return code 0 if the checksum succeeds (i.e. grep matches OK), or a non-zero return code if it fails.

When you pipe the output of one command into another, only the stdout of the first command is passed as stdin to the second.
The lines you see are written by sha256sum to stderr, so grep never sees them.
You can verify that by redirecting sha256sum's stderr to grep as well:
sha256sum -c SHASUMS256.txt 2>&1 |grep -q OK
Hope that helps.
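If the end goal is just exit code 0 on success and non-zero otherwise, a minimal sketch along these lines should do it (it anchors the grep pattern to the one file actually downloaded, and discards the stderr noise about the other entries):
#!/bin/bash
# Sketch only: download the tarball and the checksum list, then verify just that tarball.
# Assumes GNU coreutils sha256sum; the "No such file or directory" stderr messages are discarded.
wget http://nodejs.org/dist/latest/node-v0.10.26.tar.gz
wget http://nodejs.org/dist/latest/SHASUMS256.txt
if sha256sum -c SHASUMS256.txt 2>/dev/null | grep -q "node-v0.10.26.tar.gz: OK"; then
    exit 0
else
    exit 1
fi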

Related

Jenkins check file exists inside zip file

Is there a way to check whether a file exists inside a zip file without unzipping it? I'm using Artifactory, and using curl alone doesn't work. Can you advise?
I tried the below:
sh(script: "curl -o /dev/null -sf <artifactory url>")
but this always returns success, and below:
unzip -l <file.zip> | grep -q <file name>
but this needs unzip to be installed.
As of Artifactory 7.15.3, as mentioned on this page, archive search is disabled by default. Can you confirm whether you are on a version above this? If so, you can enable the feature by navigating to Admin > Artifactory > Archive Search Enabled and ticking that checkbox. But please be aware that enabling this feature writes a lot of information to the database for every archive file and can impact performance.
You can then search for items in a zip file. Below is an example command where I am searching for .class files in my zip via curl. You could do something similar in Jenkins.
$ curl -uadmin:password -X GET "http://localhost:8082/artifactory/api/search/archive?name=*.class&repos=test" -H "Content-type: application/json"
You can make use of bash commands unzip and grep.
unzip -l my_file.zip | grep -q file_to_search
# Example
$ unzip -l 99bottles.zip | grep -q 99bottles.pdf && echo $?
0
P.S. If the zip contains a directory structure, then grep for the full path of the file name.
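For instance, a quick sketch of that full-path check (the archive name and the path inside it are placeholders, not from the original question):
# Exit 0 only if the exact entry appears in the listing; nothing is extracted.
if unzip -l my_file.zip | grep -q "docs/readme.txt"; then
    echo "found"
else
    echo "missing" >&2
    exit 1
fi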

tar command Option Not able to understand

Check the Size of the tar, tar.gz and tar.bz2 Archive File.
To check the size of any tar, tar.gz and tar.bz2 archive file, use the following command. For example, the below command will display the size of the archive file in kilobytes (KB).
tar -czf - tecmint-14-09-12.tar | wc -c
12820480
tar -czf - MyImages-14-09-12.tar.gz | wc -c
112640
tar -czf - Phpfiles-org.tar.bz2 | wc -c
20480
What does " - " do in this command? I was not able to find anything related to it in the official tar documentation.
- is the value given to the -f option, which is the filename of the output file, normally a .tgz filename.
- is special in that many tools accept it as an alias for stdout, which in this case is a pipe to wc -c. Remove the wc and you'll see that it "messes up" your terminal.
PS: it doesn't seem to be documented in the tar man page; IMO it should be.
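A small illustration of what "-" does there (the directory name mydir is a placeholder):
tar -czf - mydir | wc -c     # archive goes to stdout and is counted; nothing is written to disk
tar -czf mydir.tgz mydir     # same archive, written to a file instead
wc -c < mydir.tgz            # should report the same byte count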

How to get path of directory in gunzip bundle

My archive structure is something like this:
archive.tgz/manager-34038240834402384/temp1
                                     /temp2/temp4/temp5
                                     /temp3/temp6
I just need the string 'manager-34038240834402384' without extracting the whole bundle. I tried tar -tvf, which gives me the whole paths, but I just want the string above. How do I get it?
tar --list -f archive.tgz | head -n 1
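If you want only the directory name itself, with no trailing slash or deeper path components, a small refinement of the above should work:
# Take the first entry and keep only its first path component.
tar --list -f archive.tgz | head -n 1 | cut -d/ -f1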

How to copy multiple files from container to host using docker cp

I want to use wildcard to select multiple files from a directory in a container and use docker cp to copy these files from container to docker host.
I couldn't find if support for using wildcard is available with docker cp yet or not.
docker cp fd87af99b650:/foo/metrics.csv* /root/metrices_testing/
This results with the error metrics.csv*: no such file or directory
I came across an example where a for loop was used to select a few files and send them to a container, but I want to transfer files from the container to the host, and I want to do this on the Docker host itself, since the script runs on the host only.
Using docker exec to select the files first and then copying them with docker cp could be an option, but that is a two-step process.
Can someone please help me do this in one step?
EDIT:
I tried this. A step closer, but still failing.
# for f in $(docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*");
do docker cp SPSRS:$f /root/metrices_testing/;
done
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-08:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:00
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:15
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:30
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-09:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-10:00
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-10:15
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-10:30
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-10:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-11:00
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-11:15
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-11:30
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-11:45
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-12:00
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-12:15
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-12:30
: no such file or directory lstat /docker/overlay2/193d2ad0d8d087377e3b96cbfb672b0e39132ae5e961872127614c9396f8c068/merged/opt/SPS_18_5_R1/logs/metrics.csv.2018.07.10-12:45
In fact your solution can achieve your aim; it just needs a little change:
for f in $(docker exec -it SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*"); do docker cp SPSRS:$f /root/metrices_testing/; done
->
for f in $(docker exec SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*"); do docker cp SPSRS:`echo $f | sed 's/\r//g'` /root/metrices_testing/; done
This is because the output of docker exec SPSRS bash -c "ls /opt/tpa/logs/metrics.csv*" has a \r at the end of every matched string, so docker cp cannot find the files in the container.
So we use echo $f | sed 's/\r//g' to strip the \r from every file name, which makes it work.
NOTE: for Alpine, use sh instead of bash; also, -it should be dropped, because the colorized output it causes in Alpine introduces invisible characters like ^[[0;0m.
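Putting that note together with the loop, a hedged Alpine-flavoured variant might look like this (same container name and paths as in the question):
for f in $(docker exec SPSRS sh -c "ls /opt/tpa/logs/metrics.csv*"); do
    # Strip the trailing \r before handing the path to docker cp.
    docker cp "SPSRS:$(echo "$f" | sed 's/\r//g')" /root/metrices_testing/
done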
The docker cp command supports copying a folder with all of the contents inside it:
docker cp -a container-id:/opt/tpa/logs/ /root/testing/
The above example copies files from the container folder /opt/tpa/logs to the local machine's /root/testing/ folder. All the files inside /logs/ will be copied locally. The trick here is using the -a option along with docker cp.
Docker cp still doesn't support wildcards. You can however use them in a Dockerfile in the following way:
COPY hom* /mydir/ # adds all files starting with "hom"
COPY hom?.txt /mydir/ # ? is replaced with any single character, e.g., "home.txt"
Reference: https://docs.docker.com/engine/reference/builder/#copy
Run this inside the container:
dcp() {
    if [ "$#" -eq 1 ]; then
        # Single file: docker cp can handle it directly.
        printf "docker cp %q .\n" "$(hostname):$(readlink -e "$1")"
    else
        # Multiple files: bundle them into a temporary archive inside the container.
        local archive="$(mktemp -t "export-XXXXX.tgz")"
        tar czf "$archive" "$@" --checkpoint=.52428800
        printf "docker exec %q cat %q | tar xvz -C .\n" "$(hostname)" "$archive"
    fi
}
Then select the files you want to copy out:
dcp /foo/metrics.csv*
It'll create an archive inside of the container and spit out a command for you to run. Run that command on the host.
e.g.
docker exec 1c75ed99fa42 cat /tmp/export-x9hg6.tgz | tar xvz -C .
Or, I guess you could do it without the temporary archive:
dcp() {
    if [ "$#" -eq 1 ]; then
        printf "docker cp %q .\n" "$(hostname):$(readlink -e "$1")"
    else
        # Stream a tar of the arguments straight out of the container, no temporary file.
        printf "docker exec %q tar czC %q" "$(hostname)" "$PWD"
        printf " %q" "$@"
        printf " | tar xzvC .\n"
    fi
}
It will generate a command for you, like:
docker exec 1c75ed99fa42 tar czC /root .cache .zcompdump .zinit .zshrc .zshrc.d foo\ bar | tar xzvC .
You don't even need the dcp helper then; it's just a convenience.
docker cp accepts either files or tar archives on stdin, so you can pack the list of files provided as arguments into a tar archive, write the archive to stdout, and pipe it to docker cp.
#!/bin/bash
if [[ "$#" -lt 2 || "$1" == "-h" || "$1" == "--help" ]]; then
    printf "Copy files to docker container directory.\n\n"
    echo "Usage: $(basename $0) files... container:directory"
    exit 0
fi
# Everything but the last argument is a source file; the last argument is container:directory.
SOURCE="${*%${!#}}"
TARGET="${@:$#}"
tar cf - $SOURCE | docker cp - $TARGET
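Usage might then look like this (the script name and file names are placeholders):
./docker-cp-to-container.sh metrics1.csv metrics2.csv SPSRS:/opt/tpa/logs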

How to untar without leading slash /

I have a tar file, 11.2.0.4.tar, containing the following path: /oracle/product/11.2.0.4. I want to untar it into a custom folder, for example /export. I tried the -C option, but it didn't work: the shell hangs for a few seconds, then the prompt returns without any result.
tar -xvf 11.2.0.4.tar -C /export
So how do I extract without the leading "/" so that I end up with /export/oracle/product/11.2.0.4? I read there is an option for this only when tarring, but not when untarring.
