I am running a parameterised build in Jenkins that counts the number of lines in a file; the job has one file parameter whose file location is pqr. The script is named linecount.sh and is saved on a remote server. When I execute it from Jenkins with the command sh linecount.sh filename, it works perfectly. But when I remove filename from the arguments and run the same script as a parameterised build, the console shows the error below:
Started by user Prasoon Gupta
[EnvInject] - Loading node environment variables.
Building in workspace users/Prasoon/sample_programs
Copying file to pqr
[sample_programs] $ /bin/sh -xe /tmp/hudson3529902665956638862.sh
+ sh linecount.sh
PRASOON4
linecount.sh: line 15: parameterBuild.txt: No such file or directory
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I am uploading the file (parameterBuild.txt) from my local machine. Why is it giving this error?
My doubt is: in the shell script I refer to the argument as $1. How should I refer to the file when I pass it as a file parameter?
The uploaded file will not retain the same name as it has on your local computer. It will be named after the File location argument specified in the file parameter settings:
In this example I will get a file called file.txt in my workspace root, regardless of what I call it on my computer.
So if I now build my job and enter the following in the parameter dialog (note that my local filename is table.html):
Then I get the following in the log (I have a build step which does ls -l):
Building on master in workspace /var/lib/jenkins/workspace/fs
Copying file to file.txt
[fs] $ /bin/sh -xe /tmp/hudson845437350739055843.sh
+ ls -l
total 4
-rw-r--r-- 1 jenkins jenkins 292 Feb 15 07:23 file.txt
Finished: SUCCESS
Note that table.html is now called file.txt, i.e. what I entered as File location.
So in your case the command should be:
sh linecount.sh pqr
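For reference, a minimal sketch of what a script like linecount.sh could look like (the actual script is not shown in the question, so this is an assumption; it simply counts the lines of the file passed as the first argument):
#!/bin/bash
# hypothetical linecount.sh: count the lines of the file given as $1
# with a file parameter whose File location is "pqr", call it as: sh linecount.sh pqr
wc -l < "$1"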
There has been a long-standing bug that makes it impossible to use file parameters in pipeline jobs:
Handle file parameters
file parameter not working in pipeline job
There is a workaround for this issue, https://github.com/janvrany/jenkinsci-unstashParam-library, and in a pipeline script you use it like this:
library "jenkinsci-unstashParam-library"
node {
def file_in_workspace = unstashParam "file"
sh "cat ${file_in_workspace}"
}
If it's a Free-Style job and your configuration looks similar to this - https://i.stack.imgur.com/vH7mQ.png - then you can simply run sh linecount.sh ${pqr} to get what you are looking for.
How does one create test result files for bitbucket pipelines?
My bitbucket-pipelines.yml contains:
options:
  docker: true
pipelines:
  default:
    - step:
        name: test bitbucket pipelines stuff..
        script: # Modify the commands below to build your repository.
          - /bin/bash -c 'mkdir test-results; echo Error Hello, World >> test-results/test1.txt; find'
and when running this pipeline I get
/bash -c 'mkdir test-results; echo Error Hello, World >> test-results/test1.txt; find'
<1s
+ /bin/bash -c 'mkdir test-results; echo Error Hello, World >> test-results/test1.txt; find'
.
(... censored/irrelevant stuff here)
./test-results
./test-results/test1.txt
then I get the "build teardown" saying it can't find test-results/test1.txt:
Build teardown
<1s
Searching for test report files in directories named
[test-results, failsafe-reports, test-reports, TestResults, surefire-reports] down to a depth of 4
Finished scanning for test reports. Found 0 test report files.
Merged test suites, total number tests is 0, with 0 failures and 0 errors.
I am surprised that it failed to find the ./test-results/test1.txt file, hence the question.
Usually, each language/framework has some kind of utility to automatically produce such files as an outcome of a test suite run.
E.g. in Python you could simply run
pytest --junitxml=test-results/pytest.xml
See https://docs.pytest.org/en/latest/how-to/output.html#creating-junitxml-format-files
Manually crafting the XML yourself feels brittle and tedious. Better to find whatever library/option is available for your language/framework.
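As a sketch of how that could slot into a Bitbucket step for a Python project (pytest and the paths here are assumptions, not taken from the question):
#!/bin/bash
# install the test runner and write a JUnit-style XML report into a
# directory name the Bitbucket teardown scanner actually looks for
pip install pytest
mkdir -p test-results
pytest --junitxml=test-results/pytest.xml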
Per https://support.atlassian.com/bitbucket-cloud/docs/test-reporting-in-pipelines/ it seems the reports have to be XML files, in JUnit XML format; an example of that format can be found here: https://www.ibm.com/docs/en/developer-for-zos/9.1.1?topic=formats-junit-xml-format
So try changing bitbucket-pipelines.yml to
options:
  docker: true
pipelines:
  default:
    - step:
        name: test bitbucket pipelines stuff..
        script: # Modify the commands below to build your repository.
          - export IMAGE_NAME2=easyad/easyad_nginx:$BITBUCKET_COMMIT
          - /bin/bash bitbucket_pipeline_tests.sh
and in bitbucket_pipeline_tests.sh add
#!/bin/bash
mkdir test-results;
echo '<?xml version="1.0" encoding="UTF-8" ?>
<testsuites id="20140612_170519" name="New_configuration (14/06/12 17:05:19)" tests="225" failures="1262" time="0.001">
<testsuite id="codereview.cobol.analysisProvider" name="COBOL Code Review" tests="45" failures="17" time="0.001">
<testcase id="codereview.cobol.rules.ProgramIdRule" name="Use a program name that matches the source file name" time="0.001">
<failure message="PROGRAM.cbl:2 Use a program name that matches the source file name" type="WARNING">
WARNING: Use a program name that matches the source file name
Category: COBOL Code Review – Naming Conventions
File: /project/PROGRAM.cbl
Line: 2
</failure>
</testcase>
</testsuite>
</testsuites>' >>./test-results/test1.xml
then the pipeline run should say 17 / 45 tests failed, as indicated by the sample XML above...
I am trying to set the Build name/Build description based on the outcome of a shell script.
For example, I am executing the following lines in shell:
echo `date`
if [ "$test" == true ]; then
  echo "BuildSuccess"
else
  echo "NoBuild"
fi
I am then running the build step "Changes build description". In this, I have added the macro:
${BUILD_LOG,maxLines=1}
After running the job I get the output as:
[SSH] executing...
Thu May 20 00:47:42 PDT 2021
BuildSuccess
[SSH] completed
[SSH] exit-status: 0
New run description is '[...truncated 478 B...]
'
Evaluated macro: '#37'
New run name is '#37'
Finished: SUCCESS
Can anyone help me understand why the macro is getting evaluated to [...truncated 478 B...]?
Is there a way I can capture the text "BuildSuccess" from the log?
I am in effect trying to capture the last line of the build log.
Please note that this is a freestyle project and not a pipeline.
I solved a similar issue by using BUILD_LOG_REGEX from the email-ext plugin.
Here is a tutorial: http://siddesh-bg.blogspot.com/2012/04/using-buildlogregex-in-jenkins-email.html
With BUILD_LOG_REGEX, that plugin is able to query the build log and extract matching parts of it.
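For example, a token along these lines (parameter names as documented for email-ext's token macros; the regex value is just an assumption matching the "BuildSuccess"/"NoBuild" strings above) should pull out only the matching log line instead of the truncated tail:
${BUILD_LOG_REGEX, regex="BuildSuccess|NoBuild", maxMatches=1, showTruncatedLines=false}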
I have a working shell script that runs with no problems on its own, but when I execute it from an Xcode run script phase it fails.
The script scans all the proto files in a directory and converts them to Swift using ProtoBuf; after that it moves the Swift files into an app folder. The script code is the following:
#!/bin/bash
for protoFile in ./*.proto;
do
protoc --swift_out=. $protoFile
done
for file in ./*.swift;
do
mv $file ../Convert\ AV/Model/USBDongle/Proto/
done
Any ideas?
Thank you
I was calling the script from the directory where it was located. When Xcode executes the script, that assumption is no longer true. So I now get the directory where the script is located and do the logic with that path:
#!/bin/bash
#Get Directory where the script is located
baseDirectory=$(dirname "$0")
echo "$baseDirectory"
for protoFile in "$baseDirectory"/*.proto
do
  echo "$protoFile"
  protoc --swift_out="$baseDirectory" -I "$baseDirectory" "$protoFile"
done
for protoFileSwift in "$baseDirectory"/*.swift;
do
  echo "$protoFileSwift"
  mv "$protoFileSwift" "$baseDirectory"/../Convert\ AV/Model/USBDongle/Proto
done
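A shorter sketch of the same idea (not from the original answer) is to change into the script's own directory up front, after which the original relative-path version works unchanged:
#!/bin/bash
# cd into the directory containing this script, so ./*.proto and the
# relative mv destination resolve correctly even when Xcode runs the script
cd "$(dirname "$0")" || exit 1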
I am trying to set up a job in Jenkins using this p4 plugin. I successfully installed the plugin and set up the p4 configuration to point to my company's p4 executable.
Now the actual failure happens when I build the project. I am seeing the following:
Started by user anonymous
Building remotely on scspr0011492003.gdl.englab.netapp.com (scspr0011492003) in workspace /tmp/workspace/TestP4
Using remote perforce client: test--2000486220
[TestP4] $ /usr/software/rats/bin/p4 workspace -o test--2000486220
Last build changeset: 2464123
[TestP4] $ /usr/software/rats/bin/p4 changes -s submitted -m 1 //test--2000486220/...
[TestP4] $ /usr/software/rats/bin/p4 -s changes -s submitted //test--2000486220/...#2464124,#2515192
[TestP4] $ /usr/software/rats/bin/p4 describe -s 2515192
[TestP4] $ /usr/software/rats/bin/p4 -G where //...
[TestP4] $ /usr/software/rats/bin/p4 -s users alirezam
[TestP4] $ /usr/software/rats/bin/p4 user -o alirezam
Sync'ing workspace to changelist 2515192.
[TestP4] $ /usr/software/rats/bin/p4 -s sync //test--2000486220/...#2515192
Sync complete, took 1755 ms
[TestP4] $ /usr/software/rats/bin/p4 -xe /tmp/hudson6814857322401659205.sh
(b4p4: for help on the 'b4p4' wrapper, use 'p4 b4p4help'; p4 -V for version)
Perforce client error:
open for read: e: No such file or directory
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I am not sure what -xe is in p4 and not sure why it's failing. Can someone help? Thank you.
The issue here appears to be a combination of things: wrong p4 command syntax and a possibly invalid file name.
The ‘-x’ global option in Perforce is for feeding a list of files as arguments to a command. For example, see the reference here:
http://www.perforce.com/perforce/doc.current/manuals/cmdref/global.options.html
I see a couple of things wrong with this command:
/usr/software/rats/bin/p4 -xe /tmp/hudson6814857322401659205.sh
1] Unless ‘e’ is a file or a file list, nothing is actually being passed in.
2] On that line, there is no Perforce command used with the global option -x. For example, a command such as ‘edit’ or ‘add’.
Should this line perhaps look like this instead?
/usr/software/rats/bin/p4 -x /tmp/hudson6814857322401659205.sh edit
You could also try something like this here:
echo /tmp/hudson6814857322401659205.sh|p4 -x - edit
to see if you get the same error of “Perforce client error:
open for read: e: No such file or directory”? That error indicates that it might be returning something that is not a valid filename.
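For illustration (the paths are hypothetical), the file handed to -x is expected to contain one file path per line, which p4 then feeds as arguments to the named command:
p4 -x filelist.txt edit
where filelist.txt would contain something like:
//depot/project/foo.c
//depot/project/bar.c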
I am looking for the following functionality: we have a directory with some files in it.
Whenever anyone makes a change to any of the files in the directory, Jenkins should trigger a build.
Is there any plugin or method for this functionality? Please advise.
Thanks in advance.
I have not tried it myself, but the FSTrigger plugin seems to do what you want:
FSTrigger provides polling mechanisms to monitor a file system and
trigger a build if a file or a set of files have changed.
If you can monitor the directory with a script, you can trigger the build with a HTTP GET, for example with wget or curl:
wget -O- $JENKINS_URL/job/JOBNAME/build
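On recent Jenkins versions this endpoint generally requires a POST and authentication; a curl variant might look like this, with USER and API_TOKEN as placeholders for your own credentials:
curl -X POST "$JENKINS_URL/job/JOBNAME/build" --user "USER:API_TOKEN"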
Although only slightly related: this issue is about monitoring static files on the filesystem, but there are many version control systems for exactly this purpose.
I answered this in another post, in case you're using git to track changes to the files themselves:
#!/bin/bash
set -e
job_name="whatever"
JOB_URL="http://myserver:8080/job/${job_name}/"
FILTER_PATH="path/to/folder/to/monitor"
python_func="import json, sys
obj = json.loads(sys.stdin.read())
ch_list = obj['changeSet']['items']
_list = [ j['affectedPaths'] for j in ch_list ]
for outer in _list:
    for inner in outer:
        print(inner)
"
_affected_files=`curl --silent ${JOB_URL}${BUILD_NUMBER}'/api/json' | python -c "$python_func"`
if [ -z "`echo \"$_affected_files\" | grep \"${FILTER_PATH}\"`" ]; then
echo "[INFO] no changes detected in ${FILTER_PATH}"
exit 0
else
echo "[INFO] changed files detected: "
for a_file in `echo "$_affected_files" | grep "${FILTER_PATH}"`; do
echo " $a_file"
done;
fi;
You can add the check directly to the top of the job's Execute shell step, and it will exit 0 if no changes are detected. That way you can still poll the top level of the repo for check-ins to trigger builds, but only complete a build when the files in question have actually changed.