Jenkins: cd to a folder matching a pattern

Here is the folder structure I have.
Workspace folder: D:\Node\MyNode\
When a Jenkins build runs on a node, the files from SCM get downloaded to the following folder: D:\Node\MyNode\xx_development
I need to cd to the folder "xx_development"; the xx part can change for different streams (RTC), but "_development" stays the same.
How can I cd to a folder matching *_development using a pipeline script?
Edit: I am using Windows nodes for Jenkins.

To change the current directory to the folder matching the pattern *_development, you can use the following script:
For Windows:
def folder = bat(returnStdout: true, script: '@dir /b /ad | findstr "_development"').trim()
dir(folder) {
    // steps placed here run inside the matched folder
}
dir /b /ad | findstr "_development" --> lists all directories in the current folder and filters them by the pattern _development.
/b --> to list only the bare names.
/ad --> to list only directories.
findstr --> to filter the output by the pattern _development.
The leading @ stops cmd from echoing the command itself into the captured output.
Note that a separate bat "cd ${folder}" step would have no lasting effect, because each bat step runs in its own shell; the dir(folder) { ... } pipeline step runs the enclosed steps inside that folder instead.
For Linux:
def folder = sh(returnStdout: true, script: 'ls -d */ | grep "_development"').trim()
dir(folder) {
    // steps placed here run inside the matched folder
}
ls -d */ | grep "_development" --> lists all directories in the folder and filters by the pattern _development.
trim() --> removes any leading or trailing whitespace, including the newline at the end of the captured command output.
As on Windows, use the dir(folder) { ... } step to run subsequent steps inside the matched folder; a plain sh "cd ${folder}" would not persist across steps.
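One detail worth knowing: the name captured from ls -d */ keeps a trailing slash, which trim() does not remove. A quick local check of the capture, using made-up directory names:

```shell
# Throwaway demo of the ls | grep capture; directory names are invented.
tmp=$(mktemp -d); cd "$tmp"
mkdir ab_development other
folder=$(ls -d */ | grep "_development")
folder=${folder%/}          # drop the trailing slash that `ls -d */` leaves
echo "$folder"              # ab_development
```

The dir() step tolerates the trailing slash, but stripping it keeps paths clean if the name is reused elsewhere.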

Related

Unix: parse a file of full paths to SHA256 checksum files and run a command on each

I have a file file.txt with filenames ending with *.sha256, including the full paths of each file. This is a toy example:
file.txt:
/path/a/9b/x3.sha256
/path/7c/7j/y2.vcf.gz.sha256
/path/e/g/7z.sha256
Each line has a different path/file. The *.sha256 files have checksums.
I want to run the command "sha256sum -c" on each of these *.sha256 files and write the output to an output_file.txt. However, this command only accepts the name of the .sha256 file, not the name including its full path. I have tried the following:
while read in; do
    sha256sum -c "$in" >> output_file.txt
done < file.txt
but I get:
"sha256sum: WARNING: 1 listed file could not be read"
which is due to the path included in the command.
Any suggestion is welcome
#!/bin/bash
while read -r in
do
    thedir=$(dirname "$in")
    thefile=$(basename "$in")
    ( cd "$thedir" && sha256sum -c "$thefile" ) >> output_file.txt
done < file.txt
Modify your code to extract the directory and file parts of the in variable. The subshell around cd returns you to the starting directory after each iteration, so output_file.txt always lands where the script was started rather than in each checksum's directory.
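The approach is easy to sanity-check on a throwaway tree (paths and file names below are invented for the demo):

```shell
# Build a one-file tree, checksum it, then run the dirname/basename loop on it.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b"
echo "hello" > "$tmp/a/b/data.txt"
# Create the .sha256 file from inside the directory, so it lists a bare filename.
( cd "$tmp/a/b" && sha256sum data.txt > data.txt.sha256 )
echo "$tmp/a/b/data.txt.sha256" > "$tmp/file.txt"
while read -r in; do
    ( cd "$(dirname "$in")" && sha256sum -c "$(basename "$in")" ) >> "$tmp/output_file.txt"
done < "$tmp/file.txt"
result=$(cat "$tmp/output_file.txt")
echo "$result"              # data.txt: OK
rm -rf "$tmp"
```

This also shows why the original loop failed: the .sha256 file lists a bare filename, so sha256sum -c must run from that file's directory.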

Using “findFiles” in a Jenkinsfile with a pattern matching more than one file suffix, using glob (Ant glob)

I try to capture two file types using an Ant glob and findFiles in a Jenkinsfile.
In my dir I have:
xxx.ipa
foo.plist
When I do:
files = findFiles(glob: '**/*.[ipa|plist]')
or
files = findFiles(glob: '**/*.ipa|*.plist')
I get none, but when I do:
files = findFiles(glob: '**/*.ipa')
I do get the xxx.ipa file.
I'm using Jenkins findFiles, which uses an Ant glob to capture files by pattern:
https://www.jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#findfiles-find-files-in-the-workspace
The problem is that glob is not a regex but an Ant-style pattern, which has no alternation syntax.
So you either have to capture each suffix separately (and concatenate the results), or use a shell script with grep or find instead, e.g.:
def files = sh(returnStdout: true, script: 'find . -name "*.ipa" -o -name "*.plist"')
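The find alternative is easy to verify locally; the file names below mirror the question, plus one decoy:

```shell
# Make the two files from the question plus a decoy, then run the find from the answer.
tmp=$(mktemp -d); cd "$tmp"
touch xxx.ipa foo.plist readme.md
found=$(find . -name "*.ipa" -o -name "*.plist" | sort)
echo "$found"
# ./foo.plist
# ./xxx.ipa
```

The -o operator gives the alternation that the Ant glob lacks; the decoy readme.md is correctly skipped.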

Delete lines of many files using grep and GNU parallel

I have a directory with many files that all end in "_all.txt". I want to delete all lines in each of these files containing either a "*" or a "-" and send them to files ending in "_all_cleaned.txt".
Right now I am using a for loop as follows:
for file in *_all.txt; do
    filename=$(echo "$file" | cut -d '_' -f 1)
    grep -vwE "(*|-)" "$file" > "${filename}_all_cleaned.txt"
done
I would like to be able to do this in parallel using GNU parallel so that the command will be executed on each file on a different compute node instead of waiting for one node to do all in a row.
How can I incorporate GNU parallel into this loop?
If the files are in the login dir on the servers (i.e. the dir you get by ssh server1 pwd):
parallel -Sserver1,server2 'grep -vwE "(*|-)" {} > {= s/\.txt$/_cleaned.txt/ =}' ::: *.txt
If it is the same dir relative to $HOME (e.g. /home/me/my/dir):
parallel --wd . -Sserver1,server2 'grep -vwE "(*|-)" {} > {= s/\.txt$/_cleaned.txt/ =}' ::: *.txt
If it is /different/dir:
parallel --wd /different/dir -Sserver1,server2 'grep -vwE "(*|-)" {} > {= s/\.txt$/_cleaned.txt/ =}' ::: *.txt
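Before involving remote servers, the grep filter and the filename rewrite can be checked locally with plain shell. The sample file content below is invented, and the * is escaped here so the pattern is unambiguously a literal star (GNU grep also accepts the unescaped form used above):

```shell
# One sample *_all.txt file; lines containing a standalone "*" or "-" must go.
tmp=$(mktemp -d); cd "$tmp"
printf 'keep\n*\nbad - line\nalso keep\n' > sample_all.txt
for f in *_all.txt; do
    grep -vwE '(\*|-)' "$f" > "${f%.txt}_cleaned.txt"
done
cleaned=$(cat sample_all_cleaned.txt)
echo "$cleaned"
# keep
# also keep
```

The ${f%.txt} expansion performs the same .txt-to-_cleaned.txt rename that the parallel replacement string {= s/\.txt$/_cleaned.txt/ =} does.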

Create symlinks instead of copy with maven-dependency-plugin : copy-dependencies

I work on a Maven project that needs to copy more than 10 GB of artifacts into a target repository from the local Maven repository (after downloading them).
In some cases (e.g. for tests), I'd like to replace this copy with symlink creation in order to save a few minutes.
My question is: is there a way to ask the maven-dependency-plugin goal copy-dependencies to create symlinks, or is there any Maven plugin that can do it?
The copy-dependencies goal cannot, to my knowledge, do this out of the box. However, you can use a shell script:
#!/bin/sh
outputDir=target/dependency
mkdir -p "$outputDir"
mvn dependency:resolve |
  grep ':\(compile\|runtime\)' | sed 's/\[INFO\] *//' |
  while read -r gav
  do
    case "$gav" in
      *:*:*:*:*:*) # G:A:P:C:V:S
        g="${gav%%:*}"; remain="${gav#*:}"
        a="${remain%%:*}"; remain="${remain#*:}"
        p="${remain%%:*}"; remain="${remain#*:}"
        c="${remain%%:*}"; remain="${remain#*:}"
        v="${remain%%:*}"
        s="${remain#*:}"
        ;;
      *:*:*:*:*) # G:A:P:V:S
        g="${gav%%:*}"; remain="${gav#*:}"
        a="${remain%%:*}"; remain="${remain#*:}"
        p="${remain%%:*}"; remain="${remain#*:}"
        c=""
        v="${remain%%:*}"
        s="${remain#*:}"
        ;;
    esac
    g=$(echo "$g" | sed 's/\./\//g')
    test -n "$c" && artName="$a-$v-$c" || artName="$a-$v"
    ln -s "$HOME/.m2/repository/$g/$a/$v/$artName.$p" "$outputDir"
  done
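The %%/# parameter-expansion parsing in the middle of the script can be exercised on its own with a made-up coordinate (the group and artifact names below are hypothetical):

```shell
# Parse a 5-field G:A:P:V:S coordinate exactly as the script's second case branch does.
gav="com.example:mylib:jar:1.2.3:compile"
g="${gav%%:*}"; remain="${gav#*:}"        # group id, then drop it
a="${remain%%:*}"; remain="${remain#*:}"  # artifact id
p="${remain%%:*}"; remain="${remain#*:}"  # packaging
v="${remain%%:*}"                         # version
s="${remain#*:}"                          # scope
g=$(echo "$g" | sed 's/\./\//g')          # dots in the group id become path separators
path="$g/$a/$v/$a-$v.$p"
echo "$path"                              # com/example/mylib/1.2.3/mylib-1.2.3.jar
```

This is the repository-relative path that the ln -s line joins onto $HOME/.m2/repository.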

Extract text files in each subfolder and join them with the subfolder name

I have compressed text files in the following folder structure:
~/A/1/1.faa.tgz #each tgz file has dozens of faa text files
~/A/2/2.faa.tgz
~/A/3/3.faa.tgz
I would like to extract the faa files (text) from each tgz file and then join them using the subfolder name (1, 2 and 3) to create a single text file for each subfolder.
My attempt was the following, but the files were extracted into the folder where I ran the script:
#!/bin/bash
for FILE in ~/A/*/*.faa.tgz; do
tar -vzxf "$FILE"
done
After extracting the faa files I would use "cat" to join them (for example, using cat *.faa > .txt).
Thanks in advance.
To extract:
#!/bin/bash
for FILE in ~/A/*/*.faa.tgz; do
    echo "$FILE"
    tar -vzxf "$FILE" -C "$(dirname "$FILE")"
done
The -C option makes tar extract each archive into the archive's own subfolder instead of the current directory, so the join step below finds the .faa files where it expects them.
To join:
#!/bin/bash
for dir in ~/A/*/; do
    (
        cd "$dir" || exit
        file=( *.faa )
        cat "${file[@]}" > "${PWD##*/}.txt"
    )
done
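An end-to-end rehearsal of the extract-and-join idea on a throwaway tree (one subfolder, two invented .faa files; $tmp stands in for ~, and the archive is extracted next to itself with -C so the join loop finds the files):

```shell
# Build a ~/A/1-style input, extract next to the archive, then join per subfolder.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/A/1"
printf '>seq1\nAAA\n' > "$tmp/A/1/x.faa"
printf '>seq2\nCCC\n' > "$tmp/A/1/y.faa"
tar -czf "$tmp/A/1/1.faa.tgz" -C "$tmp/A/1" x.faa y.faa
rm "$tmp/A/1"/*.faa                          # keep only the archive, as in the question
for FILE in "$tmp"/A/*/*.faa.tgz; do
    tar -xzf "$FILE" -C "$(dirname "$FILE")"  # extract into the archive's subfolder
done
for dir in "$tmp"/A/*/; do
    ( cd "$dir" && cat *.faa > "${PWD##*/}.txt" )  # 1.txt named after subfolder "1"
done
joined=$(cat "$tmp/A/1/1.txt")
echo "$joined"
rm -rf "$tmp"
```

The joined file contains both .faa files in glob order; ${PWD##*/} supplies the bare subfolder name for the output file.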
