Jenkins: check a file exists inside a zip file

Is there a way to check that a file exists inside a zip file without unzipping it? I'm using Artifactory. I can't do it with curl alone. Can you advise me?
I tried the following:
sh(script: "curl -o /dev/null -sf <artifactory url>")
but this always returns success, and
unzip -l <file.zip> | grep -q <file name>
requires unzip to be installed.

As of Artifactory 7.15.3, as mentioned on this page, archive search is disabled by default. Can you confirm that you are on a version above this? If yes, you can enable the feature by navigating to Admin > Artifactory > Archive Search Enabled and ticking that checkbox. But please be aware that, with this feature enabled, Artifactory writes a lot of extra information to the database for every archive file, which can impact performance.
You can then search for items inside a zip file. Below is an example command where I search for .class files in my zip with curl. You can do something similar from Jenkins.
$ curl -uadmin:password -X GET "http://localhost:8082/artifactory/api/search/archive?name=*.class&repos=test" -H "Content-type: application/json"
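From a Jenkins pipeline you could turn that search into a pass/fail check, for example by grepping the JSON response for the expected entry. This is only a rough sketch: the credentials, host, repository and file name are placeholders, and you may prefer a JSON-aware tool such as jq for parsing.
sh(script: "curl -sf -u user:password 'http://<artifactory host>/artifactory/api/search/archive?name=<file name>&repos=<repo>' | grep -q '<file name>'")
The sh step (and hence the build) fails when the search result does not mention the file.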

You can make use of the unzip and grep commands.
unzip -l my_file.zip | grep -q file_to_search
# Example
$ unzip -l 99bottles.zip | grep -q 99bottles.pdf && echo $?
0
P.S. If the zip contains a directory structure, then grep for the full path of the file name.
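For example, if the PDF sat in a subdirectory inside the archive (the docs/ prefix here is only an illustration):
$ unzip -l 99bottles.zip | grep -q docs/99bottles.pdf && echo $?
0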

Related

wget command not found in git bash

I've already tried pip install wget in cmd, which reports:
>pip install wget
Requirement already satisfied: wget in c:\users\user\...\python\python38-32\lib\site-packages (3.2)
however when I try the command in git bash, it keeps showing
$ wget
bash: wget: command not found
I've made sure both the python file and the git file are in PATH.
What am I doing wrong here?
If you would like to use curl on Git Bash, here is an example:
$ curl -kLSs https://github.com/opscode/chef-repo/tarball/master -o master.tar.gz
$ ls master.tar.gz
master.tar.gz
-L follow redirects
-o (lower-case o) write output to a file instead of stdout
-Ss silent mode, but show errors, if any
-k allows curl to proceed and operate even for server connections otherwise considered insecure
Reference: curl manpage.
With the command:
pip install wget
you installed this Python library https://pypi.org/project/wget/, so you can use that from inside Python:
import wget
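For example, from Git Bash you could drive the module through a Python one-liner (the URL below is only a placeholder):
$ python -c "import wget; wget.download('https://example.com/file.txt')"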
I imagine what you actually want is to be able to use wget from inside Git Bash. To do that, install Wget for Windows and add the executable to the path. Or, alternatively, use curl.
If you are just looking to have wget in Git Bash without pip or any other dependency, you can follow the nice and quick tutorial from this page:
How to add more to Git Bash on Windows
the essence of it is:
Download the wget binaries for Windows (preferably as a ZIP) from eternallybored
extract wget.exe from the zip
copy the EXE file to your Git Bash binaries folder, e.g. "C:\Program Files\Git\mingw64\bin"
done :)
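After that, a quick sanity check from a new Git Bash window (purely illustrative) confirms the binary is picked up:
$ wget --version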
Quick and dirty replacement for the single-argument, fetch-a-file use case:
alias wget='curl -O'
-O, --remote-name Write output to a file named as the remote file
Maybe give the alias a different name so you don't try to use wget flags in curl.
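For example (wurl is a made-up name here, and -L is added so redirects are followed, which curl does not do by default):
alias wurl='curl -LO'
wurl https://example.com/archive.tar.gz   # saves archive.tar.gz in the current directory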

Permanently change PATH in Dockerfile with dynamic value

I am using security scan software in my Dockerfile and I need to add its bin folder to the PATH. Its path contains the version, so I do not know it until I download the software. My current progress is something like this:
1. Download the software:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
2. The desired folder is located at SAClientUtil/SAClientUtil.X.Y.Z/bin/ (X.Y.Z may vary from run to run). Get there with a find and cd combination and try to add it to the PATH:
RUN cd "$(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"; \
export PATH="$PATH:$PWD"; # doesn't work
It looks like the ENV command does not evaluate the parameter, so
ENV PATH $PATH:"echo $(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"
doesn't work either.
Any ideas on how to dynamically add a folder to the PATH during docker image build?
If you're pretty sure the zip file will contain only a single directory with that exact layout, you can rename it to something fixed.
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip \
&& unzip SAClientUtil.zip -d tmp \
&& mv tmp/SAClientUtil.* SAClientUtil \
&& rm -rf tmp SAClientUtil.zip
ENV PATH=/SAClientUtil/bin:${PATH}
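A later build step can then call the tool by name; for instance, a purely hypothetical sanity check that the launcher is now on the PATH:
RUN command -v appscan.sh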
A simple solution would be to include a small wrapper script in your image, and then use that to run commands from the SAClientUtil directory. For example, if I have the following in saclientwrapper.sh:
#!/bin/sh
cmd=$1
shift
saclientpath=$(ls -d /SAClientUtil/SAClientUtil.*)
echo "got path: $saclientpath"
cd "$saclientpath"
exec "$saclientpath/bin/$cmd" "$#"
Then I can do this:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
COPY saclientwrapper.sh /saclientwrapper.sh
RUN sh /saclientwrapper.sh appscan.sh
And this will produce, when building the image:
STEP 6: RUN sh /saclientwrapper.sh appscan.sh
got path: /SAClientUtil/SAClientUtil.8.0.1374
COMMAND SYNTAX
appscan <command> [options]
ADDITIONAL COMMAND HELP
appscan help <command>
.
.
.

Merge these wget & egrep commands for recursive download of sitemap

I am trying to find a way to make these work together. I can run this successfully using Wget for Windows:
wget --html-extension -r http://www.sitename.com
this downloads every single file on my server that is directly linked from the root domain. I'd rather download only the pages in my sitemap. For that, I found the following trick, which uses Cygwin:
wget --quiet https://www.sitename.com/sitemap.xml --output-document - | egrep -o
"http://www\.sitename\.com[^<]+" | wget --spider -i - --wait 1
However, this only checks that the pages exist; it does not download them as static HTML files the way the prior wget command does.
Is there a way to merge these and download the sitemap pages as local html files?
If you look at the man page for wget, you will see that the --spider entry is as follows:
--spider
When invoked with this option, Wget will behave as a Web spider, which means that it will not download the pages, just check that they are there.
All you need to do to actually download the files is remove --spider from your command.
wget --quiet https://www.sitename.com/sitemap.xml --output-document - | egrep -o \
"https?://www\.sitename\.com[^<]+" | wget -i - --wait 1

Grep from wget without saving files

I am trying to download a site (with permission) and grep for a particular text in it. The problem is that I want to grep on the fly, without saving any files to the local drive. The following command does not help:
wget --mirror site.com -O - | grep TEXT
The wget manual (man page) says the usage of the command should be:
wget [option]... [URL]...
in your case, it should be:
wget --mirror -O - site.com | grep TEXT
You can use curl:
curl -s http://www.site.com | grep TEXT
How about this:
wget -qO- site.com | grep TEXT
and
curl -vs site.com 2>&1 | grep TEXT

Copy list of files with tar command. Bash

I have a directory A containing the files A/a and A/b.
When I run these commands I get the outputs.
-> tar --include A -v -cf A.tar A
a A
a A/a
a A/b
-> tar --include A/a -v -cf A.tar A
I don't understand why, in the second invocation, the file A/a is not archived. I believe I do not understand how --include works.
I am trying to give tar a list and have it create an archive with the contents. Please help.
Thank you.
