We've been running Jenkins 1.451 and 1.454 on Windows XP against a CVS repository for a few weeks now, without any problems. The CVS plugin (v1.6) was using the local cvsnt install.
We upgraded the CVS plugin to version 2.1 this morning, and since then CVS changes are no longer detected. Polling is triggered properly and tons of "cvs rlog" commands are sent, but at the end "No changes" is displayed.
Am I missing some configuration option somewhere?
Thanks.
Update 1: Looking into the CVS/Entries file, I'm seeing incorrect times for recently updated files; each entry is 4 hours later than the actual change. Could this be related? I'm in the Eastern Time Zone (Montreal) with Daylight Saving Time in effect. The last cvs checkout command looked like this:
cvs checkout -P -r d-chg00014229_op_brc_preimp-op-2012-02-27 -D 23 Mar 2012 11:56:16 EDT -d portailInt portailInt
Update 2: The 4 hour difference corresponds to GMT-adjusted time, so it looks like there's a mixup in time zones somewhere. Using CVS plugin 1.6 the cvs polling command looked like this (executed at 5:26:21 PM EDT):
cvs -q -z3 -n update -PdC -r d-chg00014229_op_brc_preimp-op-2012-02-27 -D "Thursday, March 22, 2012 9:26:21 PM UTC"
Is it possible that the CVS server isn't properly interpreting the -D argument, either the parsing part or the time zone adjustment part?
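One way to narrow that down (a sketch only; the tag and options are copied from the polling command above, and the two dates both correspond to 5:26:21 PM EDT) would be to run the same dry-run update twice, once with the -D date expressed in UTC and once in local time, and compare what each reports:
cvs -q -z3 -n update -PdC -r d-chg00014229_op_brc_preimp-op-2012-02-27 -D "22 Mar 2012 21:26:21 UTC"
cvs -q -z3 -n update -PdC -r d-chg00014229_op_brc_preimp-op-2012-02-27 -D "22 Mar 2012 17:26:21 EDT"
If only the local-time form reports the recently modified files, then the zone suffix in -D is not being honoured somewhere along the way.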
Update 3: Behaviour is the same with CVS plugin 2.2
Update 4: Manual calls to "cvs rlog" do not return anything, while similar calls to "cvs log" return revision information for all module files.
cvs rlog -d"01 Mar 2012 09:26:21 -0400<27 Mar 2012 12:00:00 -0400" -S -rd-chg00014229_op_brc_preimp-op-2012-02-27 portailInt
cvs rlog: Logging portailInt
cvs log -d"01 Mar 2012 09:00:00 -0400<27 Mar 2012 12:00:00 -0400"
RCS file: /usr/local/cvs/repcvs/PortailInternetMouvement/portailInt/Portail/src/com/xxx/pvm/portail/taglib/I18nBundleTag.java,v
Working file: Portail/src/com/xxx/pvm/portail/taglib/I18nBundleTag.java
head: 1.3
branch:
locks: strict
access list:
symbolic names:
d-chg00014229_op_impl_2012-03-25_v06: 1.1.2.4
d-chg00014229_op_impl_2012-03-25_v05: 1.1.2.4
aq_op_2012-03-25_v04: 1.1.2.4
d-chg00014229_op_impl_2012-03-25_v04: 1.1.2.4
aq_op_2012-03-25_v03: 1.1.2.3
d-chg00014229_op_impl_2012-03-25_v03: 1.1.2.3
d-chg00014229_op_impl_2012-03-25_v02: 1.1.2.3
aq_op_2012-03-25_v01: 1.1
d-chg00014229_op_impl_2012-03-25_v01: 1.1
d-chg00014229_op_brc_preimp-op-2012-02-27: 1.1.0.2
preimp_op_2012-02-27: 1.1
keyword substitution: kv
total revisions: 8; selected revisions: 3
description:
----------------------------
revision 1.1.2.5
date: 2012/03/23 15:42:50; author: ba0chzi; state: Exp; lines: +4 -26
Organize imports
----------------------------
revision 1.1.2.4
date: 2012/03/13 14:18:27; author: ba0chmn; state: Exp; lines: +1 -1
Changed the scope from request or session to application in order to improve performance
----------------------------
revision 1.1.2.3
date: 2012/03/06 21:19:03; author: ba0chmn; state: Exp; lines: +14 -8
Use of xxx's UCM file retrieval services
This seems to be a bug in the plugin, documented here: https://issues.jenkins-ci.org/browse/JENKINS-13227
Related
My current Jenkins has a large number of jobs, spread across different folders with multiple jobs each. I recently saw that one Jenkins slave (which is auto-scaled) was sending too many requests to another server at a particular time. However, I am unable to find which builds were running at that time without checking them manually. Is there any way to get this information using the API or a Groovy script?
I wrote a very small bash script to run on the Jenkins server that parses the build records (build.xml files).
It is not sufficient to look only at the job start time. A job could have started just before your time window and even ended after your time window; it would still have run within your time window.
#!/bin/bash
# Lists builds that were running between <start> and <end> (epoch seconds).
start=$1
end=$2
for build_xml in jobs/*/branches/*/builds/*/build.xml
do
    # <startTime> in build.xml is in milliseconds; drop the last three digits to get seconds
    startTime=$(sed -n -e "s/.*<startTime>\(.*\)<\/startTime>.*/\1/p" "$build_xml")
    startSeconds=${startTime::-3}
    # the modification time of build.xml approximates when the build finished
    modificationTime=$(date '+%s' -r "$build_xml")
    if (( modificationTime > start )) && (( startSeconds < end ))
    then
        echo "START $(date -d "@$startSeconds") END $(date -d "@$modificationTime") : $build_xml"
    fi
done
usage:
./getBuildsRunningBetween.sh 1565535639 1565582439
would give:
START Sun Aug 11 20:29:00 CEST 2019 END Sun Aug 11 20:30:20 CEST 2019 : jobs/job-name/branches/branch-name/builds/277/build.xml
Running jobs can be found based on the color categorization called "anime", using the Jenkins API below.
"Jenkins host url"/api/xml?tree=jobs[name,url,color]&xpath=/hudson/job[ends-with(color/text(),%22_anime%22)]&wrapper=jobs
Blue (stable) jobs that are currently running are reported as:
blue_anime
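For example, with curl (a sketch; the host, user and API token are placeholders, and the query string is the same as above):
curl -s -u "USER:API_TOKEN" "https://JENKINS_HOST/api/xml?tree=jobs[name,url,color]&xpath=/hudson/job[ends-with(color/text(),%22_anime%22)]&wrapper=jobs"
The response is a <jobs> wrapper element containing only the jobs whose color ends in _anime, i.e. the ones currently building.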
This can easily be achieved with a simple Python script and the Jenkins JSON REST API.
1. Prerequisites
Python 2.7 or 3.x and the Python requests library installed:
pip install requests
For Python 3.x:
pip3 install requests
Also: How to install pip
2. Python script to fetch builds between dates
import requests
from datetime import datetime
jenkins_url = "JENKINS_HOST"
username = "USERNAME"
password = "PASSWORD"
job_name = "JOB_NAME"
stop_date = datetime.strptime('23.11.2018 0:30:00', "%d.%m.%Y %H:%M:%S")
start_date = datetime.strptime('10.11.2018 18:46:04', "%d.%m.%Y %H:%M:%S")
request_url = "{0:s}/job/{1:s}/api/json{2:s}".format(
    jenkins_url,
    job_name,
    "?tree=builds[fullDisplayName,id,number,timestamp,url]"
)

response = requests.get(request_url, auth=(username, password)).json()

builds = []
for build in response['builds']:
    build_date = datetime.utcfromtimestamp(build['timestamp']/1000)
    if build_date >= start_date and build_date <= stop_date:
        builds.append(build)
        print("Job name: {0:s}".format(build["fullDisplayName"]))
        print("Build number: {0:d}".format(build["number"]))
        print("Build url: {0:s}".format(build["url"]))
        print("Build timestamp: {0:d}".format(build["timestamp"]))
        print("Build date: {}\n".format(build_date))
The above script works with both Python 2.7 and 3.x. Now a little explanation:
First, download all the build data using the JSON API and load the response as JSON. Then, for each build, convert its timestamp to a datetime and compare it with the start and stop dates. Please note it's important to divide the timestamp by 1000 to get seconds rather than milliseconds (otherwise the date conversion from the timestamp will raise a ValueError).
Example output:
$ python test.py
Job name: Dummy #21
Build number: 21
Build url: http://localhost:8080/job/Dummy/21/
Build timestamp: 1541875585881
Build date: 2018-11-10 18:46:25
Job name: Dummy #20
Build number: 20
Build url: http://localhost:8080/job/Dummy/20/
Build timestamp: 1541875564250
Build date: 2018-11-10 18:46:04
On the other hand, if you want to provide the start and stop dates in a different format, remember that you'll need to adjust the format parameter in the strptime() function accordingly (see the Python datetime directives).
A few examples:
datetime.strptime("23.11.2018", "%d.%m.%Y")
datetime.strptime("2018.11.23", "%Y.%m.%d")
datetime.strptime("Jun 1 2005 1:33PM", "%b %d %Y %I:%M%p")
3. Python script to fetch a build on an exact date
If you are interested in finding a build by its exact date, just replace this line:
if build_date >= start_date and build_date <= stop_date:
with this:
if build_date == date:
where date is the particular build date.
Please note that if you want to find a build by its exact date, you'll need to provide the date in the proper format; in this case it's "%d.%m.%Y %H:%M:%S". Example:
datetime.strptime('10.11.2018 18:46:04', "%d.%m.%Y %H:%M:%S")
Otherwise, even though the dates may be the same down to a certain resolution (minutes, hours, day, etc.), Python will not consider them equal.
If you want to use another format, you'll need to adjust the build_date variable accordingly, e.g.:
build_date.strftime('%d %m,%Y')
I have the following in my ~/.bash_profile:
export SRCROOT=/users/benjamin.beasley/work/svn/ccdev
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk_dev/Contents/Home
export PATH=$PATH$:~/tools/tools-versions/gradle-2.2.1/bin
export PATH=$PATH$:~/tools/activator
In ~/tools/tools-versions/gradle-1.12/bin, I see:
drwxr-xr-x# 4 xxx.xxx WORKDAYINTERNAL\Domain Users 136 Nov 12 11:47 .
drwxr-xr-x# 13 xxx.xxx WORKDAYINTERNAL\Domain Users 442 Apr 29 2014 ..
-rwxr-xr-x# 1 xxx.xxx WORKDAYINTERNAL\Domain Users 5071 Apr 29 2014 gradle
-rwxr-xr-x# 1 xxx.xxx WORKDAYINTERNAL\Domain Users 2395 Apr 29 2014 gradle.bat
echo $PATH$:
/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin$:/Users/xxx.xxx/tools/tools-versions/gradle-2.2.1/bin$:/Users/xxx.xxx/tools/activator90566
So I have the gradle executable, and it is executable: I can run it from this directory. But if I start a new shell and type "gradle", it says command not found, even though "echo $PATH$" shows that the full canonical path to the ~/tools/tools-versions/gradle-2.2.1/bin folder is there.
However, I can execute activator, which is an executable in the ~/tools/activator directory. I have no clue why bash knows about activator and not gradle.
In summary:
gradle is executable by this user
gradle can be run from the command line.
gradle is in the $PATH$ environment variable
other programs such as activator, which are also in $PATH$, are executable anywhere in the terminal regardless of the current directory, which is what I want to be true of gradle.
Unix environment variables are referenced as $PATH, not $PATH$ (they aren't like Windows environment variables, which use %PATH%).
This is what's causing your problem.
This path is busted: /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin$:/Users/xxx.xxx/tools/tools-versions/gradle-2.2.1/bin$:/Users/xxx.xxx/tools/activator90566
Notice the 90566 at the end? That's from $$ having been expanded to the current process ID when you set the variable.
None of these are paths that actually exist or work:
/opt/X11/bin$
/Users/xxx.xxx/tools/tools-versions/gradle-2.2.1/bin$
/Users/xxx.xxx/tools/activator90566
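As a sketch, the exports from the question would become something like this (paths copied from the question; adjust as needed), with no stray $ after $PATH:
# Corrected ~/.bash_profile entries -- note there is no trailing "$" after $PATH
export SRCROOT=/users/benjamin.beasley/work/svn/ccdev
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk_dev/Contents/Home
export PATH="$PATH:$HOME/tools/tools-versions/gradle-2.2.1/bin"
export PATH="$PATH:$HOME/tools/activator"
Then open a new shell (or source ~/.bash_profile) so the corrected PATH takes effect.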
I want to monitor the hard drive health of my Windows server; for this I have installed smartmontools (smartmontools-6.1-2.win32-setup.exe).
My question is: how can I display the command output on the Nagios server, via NRPE or some other mechanism?
Some info: Nagios Core 3.5, smartmontools 6.1-2.
Command output on the Windows machine:
c:> smartctl.exe /dev/sda -l selftest
smartctl 6.1 2013-03-16 r3800 [i686-w64-mingw32-xp-sp2] (sf-6.1-2)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 17592 -
# 2 Extended offline Completed without error 00% 17393 -
# 3 Short offline Completed without error 00% 17392 -
c:> smartctl.exe /dev/sda -H
smartctl 6.1 2013-03-16 r3800 [i686-w64-mingw32-xp-sp2] (sf-6.1-2)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
C:>smartctl -d ata /dev/sda -i
smartctl 6.1 2013-03-16 r3800 [i686-w64-mingw32-xp-sp2] (sf-6.1-2)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.9
Device Model: ST3802110A
Serial Number: 5LR7M728
Firmware Version: 3.AAJ
User Capacity: 80,026,361,856 bytes [80.0 GB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA/ATAPI-7 (minor revision not indicated)
Local Time is: Fri Jun 07 19:02:13 2013 IST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Any help would be greatly appreciated.
You have two issues.
You need to be able to get Nagios to run a check remotely on your Windows server, and
You need to be able to get the data into a Nagios-compatible format.
For the first, you can probably install an agent such as NC_Net or NSClient++. This can be queried using either check_nt or check_nrpe. I would recommend using NC_Net.
For the second, you will likely have to write your own script to run the command and output in Nagios plugin format (one line of text, and an exit status of 0/1/2/3 for OK/Warn/Crit/Unknown). This script can be remotely called via check_nrpe.
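For illustration only, here is a rough sketch of such a wrapper around smartctl -H (this assumes a Unix-like host with smartctl on the PATH; on Windows the same logic would go into a batch or PowerShell script run by the agent, and the device path is just a placeholder):
#!/bin/bash
# Minimal Nagios-style health check wrapping "smartctl -H" (sketch, not a tested plugin)
DEVICE="${1:-/dev/sda}"
output=$(smartctl -H "$DEVICE" 2>&1)
if echo "$output" | grep -q "PASSED"; then
    echo "OK - SMART health check passed on $DEVICE"
    exit 0
elif echo "$output" | grep -q "FAILED"; then
    echo "CRITICAL - SMART health check failed on $DEVICE"
    exit 2
else
    echo "UNKNOWN - could not read SMART status on $DEVICE"
    exit 3
fi
On the Nagios side it would then be invoked through something like check_nrpe -H <windows-host> -c <command-name>, where the command name is whatever you register in the agent's configuration.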
However, if your goal is simply to monitor disk space, you can do that using the standard check functions built into NC_Net or NSClient++.
You may also find pre-written scripts at monitoringexchange.org, such as this one.
I want this folder structure to be zipped as it is.
Right now I am doing these steps:
Moving the files (other than folders) to a "Temp" folder.
Moving Folder1 to the "Temp" folder.
Zipping Temp.
Deleting the "Temp" folder.
Is this the correct approach or is there any simple/better way of doing this?
There is no need to move the files before zipping them.
<zip destfile="test.zip" basedir="src_dir" includes="**/*"/>
If you had a more challenging selection to make than **/*, e.g. including and excluding specific files selectively, then you could achieve that with one or more filesets (or zipfilesets) within the zip element.
Here is the dir structure, including build.xml:
$ find .
.
./build.xml
./src_dir
./src_dir/Document.txt
./src_dir/Document2.txt
./src_dir/Document3.xml
./src_dir/Folder1
Here is the zip file created:
$ jar tvf test.zip
0 Fri Jul 20 08:36:06 GMT 2012 Folder1/
0 Fri Jul 20 08:36:26 GMT 2012 Document.txt
0 Fri Jul 20 08:36:12 GMT 2012 Document2.txt
0 Fri Jul 20 08:36:18 GMT 2012 Document3.xml
When I explicitly state in my production.rb that I want UTC with:
# Timezone Set
config.time_zone = 'UTC'
Then I log into my production machine and run the production console (the machine defaults to +4 Moscow time), and I get the following output:
$ Time.now
=> 2012-02-04 20:52:32 +0400
$ Time.zone.now
=> Sat, 04 Feb 2012 16:52:43 UTC +00:00
The value of time_ago_in_words is always off by +4: if I post something 'now', it shows as '4 hours', which counts down to 0 after 4 hours and then effectively goes negative (i.e. '1 hour ago'). How do I get this to display correctly?
=================
After too many hours trying to figure this out, I finally got it just after posting this :-| I had to run:
sudo dpkg-reconfigure tzdata
and set my timezone to UTC there. Now Time.now outputs UTC instead of Moscow time. I'll leave this here for anyone else who runs into this.
As suggested by those commenting, here is the solution that allowed me to get this to work:
sudo dpkg-reconfigure tzdata
and set my timezone to UTC there; now Time.now outputs UTC instead of Moscow time.
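For what it's worth, a quick way to confirm the change took effect (assuming a Debian/Ubuntu host, which dpkg-reconfigure implies):
cat /etc/timezone   # should now report Etc/UTC (or UTC)
date                # system time should now be printed in UTC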