tar command option not able to understand: "-" (stdout)

Check the Size of the tar, tar.gz and tar.bz2 Archive File.
To check the size of any tar, tar.gz or tar.bz2 archive file, use the following command. For example, the command below will display the size of the archive file in bytes.
tar -czf - tecmint-14-09-12.tar | wc -c
12820480
tar -czf - MyImages-14-09-12.tar.gz | wc -c
112640
tar -czf - Phpfiles-org.tar.bz2 | wc -c
20480
What does " - " do in this Command not able to find anything related to it in Official tar Documentation : ref 18

- is the value given to the -f option, i.e. the filename of the output file, normally a .tgz filename.
- is special in that many tools accept it as an alias for stdout, which in this case is a pipe to wc -c. Remove the wc and you'll see it "messes up" your terminal, because the raw compressed archive gets written straight to it.
PS: it does not seem to be documented in the tar man page; IMO it should be.
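To see the equivalence for yourself, compare writing to - with writing to a regular file (the directory name here is just an illustration):

tar -czf - somedir | wc -c       # archive goes to stdout; wc counts its bytes
tar -czf somedir.tgz somedir     # same archive written to a file instead

Many other tools follow the same convention, e.g. cat - reads from stdin.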

Related

Jenkins check file exists inside zip file

Is there a way to check whether a file exists inside a zip file without unzipping it? I'm using Artifactory, and with curl alone I couldn't manage it. Can you advise?
I tried the following:
sh(script: "curl -o /dev/null -sf <artifactory url>")
but this always returns success, and
unzip -l <file.zip> | grep -q <file name>
but this requires installing unzip.
From Artifactory 7.15.3 onwards, as mentioned in this page, archive search is disabled by default. Can you confirm whether your version is above this? If yes, you can enable this feature by navigating to Admin > Artifactory > Archive Search Enabled and enabling this checkbox. But please be aware that, if you enable this feature, Artifactory writes a lot of information to the database for every archive file, which can impact performance.
You can then search for items in a zip file. Below is an example command where I am searching for .class files in my zip via curl. You can do something similar in Jenkins.
$ curl -uadmin:password -X GET "http://localhost:8082/artifactory/api/search/archive?name=*.class&repos=test" -H "Content-type: application/json"
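To turn that search into a pass/fail step, you could count the hits; a minimal sketch, assuming the endpoint returns a JSON body with a results array (verify against your Artifactory version's API docs) and that jq is available in the build image:

count=$(curl -s -uadmin:password "http://localhost:8082/artifactory/api/search/archive?name=*.class&repos=test" | jq '.results | length')
[ "$count" -gt 0 ]    # exit status 0 only if at least one match was found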
You can make use of the command-line tools unzip and grep.
unzip -l my_file.zip | grep -q file_to_search
# Example
$ unzip -l 99bottles.zip | grep -q 99bottles.pdf && echo $?
0
P.S. If the zip contains a directory structure, grep for the full path of the file name.
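For instance (the path inside the zip is illustrative):

unzip -l my_file.zip | grep -q "some/dir/file_to_search"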

Is there any way to use hadolint for multiple Dockerfiles?

Hadolint is an awesome tool for linting Dockerfiles. I am trying to integrate it into my CI, but I am struggling to run it over multiple Dockerfiles. Does someone know what the syntax should look like? Here is how my directories are laid out:
dir1/Dockerfile
dir2/Dockerfile
dir3/foo/Dockerfile
In gitlab-ci:
stage: hadolint
image: hadolint/hadolint:latest-debian
script:
  - mkdir -p reports
  - |
    hadolint dir1/Dockerfile > reports/dir1.json \
    hadolint dir2/Dockerfile > reports/dir2.json \
    hadolint dir3/foo/Dockerfile > reports/dir3.json
But the sample above is not working.
As far as I found, hadolint can process several Dockerfiles in one invocation (the shell expands the glob). So in my case:
- hadolint */Dockerfile > reports/all_reports.json
But the problem with this approach is that all reports end up in one file, which hampers maintenance and clarity.
If you want to keep the reports separated (one per top-level directory), you may want to rely on a small shell snippet.
I mean something like:
- |
find . -name Dockerfile -exec \
sh -c 'src=${1#./} && { set -x && hadolint "$1"; } | tee -a "reports/${src%%/*}.txt"' sh "{}" \;
Explanation:
find . -name Dockerfile loops over all Dockerfiles under the current directory;
-exec sh -c '…' runs a subshell for each Dockerfile, setting:
$0 = "sh" (a dummy value)
$1 = "{}" (the full, relative path of the Dockerfile), "{}" and \; being directly related to the find … -exec pattern;
src=${1#./} trims the leading ./, turning ./dir1/Dockerfile into dir1/Dockerfile;
${src%%/*} extracts the top-level directory name (dir1/Dockerfile → dir1);
and | tee -a … appends hadolint's output for each parsed Dockerfile to the corresponding top-level report file (plain > … should be avoided here, because it would overwrite earlier output when several Dockerfiles live under the same top-level directory).
I have replaced the .json extension with .txt as hadolint does not seem to output JSON data.
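Putting it together, a sketch of how this could look in .gitlab-ci.yml (the job name, stage, and artifacts block are assumptions, not from the original question):

hadolint:
  stage: hadolint
  image: hadolint/hadolint:latest-debian
  script:
    - mkdir -p reports
    - |
      find . -name Dockerfile -exec \
        sh -c 'src=${1#./} && { set -x && hadolint "$1"; } | tee -a "reports/${src%%/*}.txt"' sh "{}" \;
  artifacts:
    when: always
    paths:
      - reports/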

grep -f does not find files in loop

I want to use grep -f in a loop, but it's not seeing the files I give to -f. My grep version, from grep -V:
grep (GNU grep) 3.6
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Mike Haertel and others; see
<https://git.sv.gnu.org/cgit/grep.git/tree/AUTHORS>.
Example:
echo "line1" > searchfile.1
echo "line2" > searchfile.2
echo "line1" > targetfile
for file in `ls searchfile*`; do echo $file; ls $file; grep -f $file targetfile; done
gives the output
searchfile.1
ls: cannot access ''$'\033''[0m'$'\033''[32msearchfile.1'$'\033''[0m': No such file or directory
grep: searchfile.1: No such file or directory
searchfile.2
ls: cannot access ''$'\033''[32msearchfile.2'$'\033''[0m': No such file or directory
grep: searchfile.2: No such file or directory
But if I do it manually like
grep -f searchfile.1 targetfile
I get
line1
Any ideas what could be going on?
Don't parse ls output, use find:
find . -mindepth 1 -maxdepth 1 -name 'searchfile*' -exec echo {} \; -exec grep -f {} targetfile \;
Output:
./searchfile.2
./searchfile.1
line1
ls outputs color escape codes in addition to the file names (ls is presumably aliased to use --color=always here). Plus, there may be whitespace in the file names (not in your case, though). See also:
Why not parse ls (and what to do instead)? - https://unix.stackexchange.com/q/128985/13411
Two alternatives to Timur Shatland's find solution.
Use shell wildcard expansion
for file in searchfile*; do ...
Work around the ls alias by putting it inside quotes
for file in `"ls" searchfile*`; do ...
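For completeness, the first alternative applied to the original loop, with the expansions quoted:

for file in searchfile*; do
  echo "$file"
  grep -f "$file" targetfile
done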

Piping shasum to grep, but grep returning all lines of piped input, even ones that don't match

I'm trying to script the download of the node.js source and the corresponding SHASUMS256.txt, check the checksum, grep for OK, and return no output, just exit code 0, on success, using grep's -q flag:
wget http://nodejs.org/dist/latest/node-v0.10.26.tar.gz
wget http://nodejs.org/dist/latest/SHASUMS256.txt
sha256sum -c SHASUMS256.txt|grep -q OK
However, grep appears to let through a selection of the non-matching "No such file or directory" errors (though not all of them, confusingly):
> sha256sum -c SHASUMS256.txt|grep -q OK
sha256sum: node-v0.10.26-darwin-x64.tar.gz: No such file or directory
sha256sum: node-v0.10.26-darwin-x86.tar.gz: No such file or directory
sha256sum: node-v0.10.26-linux-x64.tar.gz: No such file or directory
sha256sum: node-v0.10.26-linux-x86.tar.gz: No such file or directory
sha256sum: node-v0.10.26-sunos-x64.tar.gz: No such file or directory
sha256sum: node-v0.10.26-sunos-x86.tar.gz: No such file or directory
sha256sum: node-v0.10.26-x86.msi: No such file or directory
sha256sum: node-v0.10.26.pkg: No such file or directory
sha256sum: node.exe: No such file or directory
Any idea what the problem is here? All I want from this script is return code 0 if the checksum succeeds (i.e. grep matches OK), or a non-zero return code if it fails.
When you pipe the output of one command into another, only the stdout of the first command is passed as stdin to the second.
The lines you see are sent by the sha256sum program to stderr.
You can verify that by sending sha256sum's stderr to grep as well:
sha256sum -c SHASUMS256.txt 2>&1 |grep -q OK
Hope that helps.
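As a further sketch (not from the original answer): the noise disappears entirely if you verify only the checksum line for the file you actually downloaded:

grep " node-v0.10.26.tar.gz$" SHASUMS256.txt | sha256sum -c - >/dev/null 2>&1
echo $?    # 0 if the checksum matches, non-zero otherwise

Newer GNU coreutils also offer sha256sum --ignore-missing -c SHASUMS256.txt, which skips listed files that are not present.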

Copy list of files with tar command. Bash

I have a directory A containing two files, A/a and A/b.
When I run these commands I get the following output:
-> tar --include A -v -cf A.tar A
a A
a A/a
a A/b
-> tar --include A/a -v -cf A.tar A
I don't understand why, in the second invocation, the file A/a is not archived. I believe I do not understand how --include works.
I am trying to give tar a list of files and have it create an archive with those contents. Please help.
Thank you.
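Not from the original thread, but if the goal is to archive an explicit list of files, both GNU tar and bsdtar accept the list via -T/--files-from, which avoids --include entirely; a minimal sketch:

printf '%s\n' A/a A/b > filelist.txt
tar -cvf A.tar -T filelist.txt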
