FailedToParse: Date expecting integer milliseconds - mongoimport

I have a JSON file with the below content:
laks#giis:/home/ubuntu# cat /tmp/db1.json
{ "_id" : { "$oid" : "54cf54e57f7cfa64c908ebd2" }, "tid" : 1, "__v" : 0 }
and it imports properly:
laks#giis:/home/ubuntu# mongoimport -d test -c tutorials --file /tmp/db1.json
connected to: 127.0.0.1
Tue May 5 03:44:25.471 imported 1 objects
but when I added a date field to the same db1.json file, it fails:
laks#giis:/home/ubuntu# cat /tmp/db1.json
{ "_id" : { "$oid" : "54cf54e57f7cfa64c908ebd2" }, "tid" : 1, "__v" : 0,"time" : { "$date" : "2015-02-01T22:09:31.475-0500" } }
laks#giis:/home/ubuntu# mongoimport -d test -c tutorials --file /tmp/db1.json
connected to: 127.0.0.1
Tue May 5 03:45:17.729 exception:BSON representation of supplied JSON is too large: code FailedToParse: FailedToParse: Date expecting integer milliseconds: offset:92
Tue May 5 03:45:17.729
Tue May 5 03:45:17.729 check 0 0
Tue May 5 03:45:17.729 imported 0 objects
Tue May 5 03:45:17.729 ERROR: encountered 1 error(s)
Other suggested solutions, like adding "--jsonArray" on the CLI, didn't help.

What version of mongoimport are you using (as reported by mongoimport --version)? Your test JSON imports fine for me using mongoimport 2.6.9 and 3.0.2. Based on the error message ("Date expecting integer milliseconds"), it looks like you are using an older version of mongoimport that only supports milliseconds. You can either upgrade to a newer production release of MongoDB, or format your $date strings in milliseconds as required.
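If upgrading isn't an option, the $date value has to be the integer count of milliseconds since the Unix epoch. A minimal Python sketch (purely illustrative, not part of the thread) converting the ISO-8601 string from the question into that form:

```python
from datetime import datetime, timedelta

# The timestamp from the question: 2015-02-01T22:09:31.475 at UTC-05:00
local = datetime(2015, 2, 1, 22, 9, 31, 475000)
utc = local + timedelta(hours=5)  # normalize the -0500 offset to UTC

# Milliseconds since the Unix epoch
epoch = datetime(1970, 1, 1)
millis = int((utc - epoch).total_seconds() * 1000)
print(millis)  # 1422846571475
```

The document line would then become `"time" : { "$date" : 1422846571475 }`, which older mongoimport versions accept.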


DATE_FORMAT="%s" cannot post timestamp data into datetime field

I would like to use a timestamp for the datetime field type, so I changed DATE_FORMAT to "%s":
settings.py:
DATE_FORMAT="%s"
...
'when': {
'type': 'datetime'
},
...
The format is now a valid timestamp when doing a GET on some data:
$ curl http://192.168.3.42:5001/stock
...
"when": "1551083317",
...
BUT I cannot insert new data; datetimes are not accepted:
$ curl -d '{"when": "1555543177"}' -H 'Content-Type: application/json' http://192.168.3.42:5001/stock
{"_status": "ERR", "_issues": {"when": "must be of datetime type"}, "_error": {"code": 422, "message": "Insertion failure: 1 document(s) contain(s) error(s)"}}
I tried without double quotes:
curl -d '{"when": 1555543177}...
Same result.
Different formats for DATE_FORMAT work, except for "%s" (timestamp).
Any idea?
$ pip list
Package Version
Cerberus 1.2
Eve 0.8.1
Flask 1.0.2
pymongo 3.7.2
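A likely culprit (not confirmed in the thread): Python's datetime.strptime() does not implement the %s directive at all, while strftime('%s') happens to work on glibc platforms because it is delegated to the C library. That would let Eve render "when" as a timestamp on GET but fail to parse it on POST. A minimal sketch:

```python
from datetime import datetime

# strptime() does not support %s, so parsing a POSTed Unix timestamp fails:
err = None
try:
    datetime.strptime("1555543177", "%s")
except ValueError as e:
    err = e
print("parse failed:", err)
```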

How to get a list of Jenkins builds that ran at a particular time?

My current Jenkins instance has a large number of jobs: different folders, each with multiple jobs. I recently saw that one Jenkins slave (which is auto-scaled) was sending too many requests to another server at a particular time. However, I am unable to find which builds ran at that particular time without checking them manually. Is there any way to get this information using the API or a Groovy script?
I wrote a very small bash script to run on the Jenkins server to parse through the build records.
It is not sufficient to look only at the job start time. A job could have started just before your time window and even ended after your time window; it would still have run within your time window.
#!/bin/bash
# Usage: ./getBuildsRunningBetween.sh <start-epoch-seconds> <end-epoch-seconds>
start=$1
end=$2
for build_xml in jobs/*/branches/*/builds/*/build.xml
do
    # Jenkins stores <startTime> in milliseconds; strip the last three digits for seconds
    startTime=$(sed -n -e "s/.*<startTime>\(.*\)<\/startTime>.*/\1/p" "$build_xml")
    # build.xml is written when the build finishes, so its mtime approximates the end time
    modificationTime=$(date '+%s' -r "$build_xml")
    if (( modificationTime > start )) && (( ${startTime::-3} < end ))
    then
        echo "START $(date -d @${startTime::-3}) END $(date -d @$modificationTime) : $build_xml"
    fi
done
usage:
./getBuildsRunningBetween.sh 1565535639 1565582439
would give:
START Sun Aug 11 20:29:00 CEST 2019 END Sun Aug 11 20:30:20 CEST 2019 : jobs/job-name/branches/branch-name/builds/277/build.xml
Running jobs can be found based on the color categorization called "anime", using the Jenkins API below.
"Jenkins host url"/api/xml?tree=jobs[name,url,color]&xpath=/hudson/job[ends-with(color/text(),%22_anime%22)]&wrapper=jobs
Blue ones are the ones that are running
blue_anime
This can be easily achieved with a simple Python script and the Jenkins JSON REST API.
1. Prerequisites
Python 2.7 or 3.x and the Python requests library installed:
pip install requests
For Python 3.x:
pip3 install requests
Also: How to install pip
2. Python script to fetch builds between dates
import requests
from datetime import datetime

jenkins_url = "JENKINS_HOST"
username = "USERNAME"
password = "PASSWORD"
job_name = "JOB_NAME"

stop_date = datetime.strptime('23.11.2018 0:30:00', "%d.%m.%Y %H:%M:%S")
start_date = datetime.strptime('10.11.2018 18:46:04', "%d.%m.%Y %H:%M:%S")

request_url = "{0:s}/job/{1:s}/api/json{2:s}".format(
    jenkins_url,
    job_name,
    "?tree=builds[fullDisplayName,id,number,timestamp,url]"
)

response = requests.get(request_url, auth=(username, password)).json()

builds = []
for build in response['builds']:
    # Jenkins timestamps are in milliseconds; divide by 1000 to get seconds
    build_date = datetime.utcfromtimestamp(build['timestamp']/1000)
    if build_date >= start_date and build_date <= stop_date:
        builds.append(build)
        print("Job name: {0:s}".format(build["fullDisplayName"]))
        print("Build number: {0:d}".format(build["number"]))
        print("Build url: {0:s}".format(build["url"]))
        print("Build timestamp: {0:d}".format(build["timestamp"]))
        print("Build date: {}\n".format(build_date))
The above script works with both Python 2.7 and 3.x. Now a little explanation:
First, download all build data using the JSON API and load the response as JSON. Then, for each build, convert its timestamp to a datetime and compare it with the start and stop dates. Please note that it is important to divide the timestamp by 1000 to get seconds, not milliseconds (otherwise the date conversion from the timestamp will raise a ValueError).
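The milliseconds pitfall is easy to demonstrate on its own, using one of the timestamps from the example output below:

```python
from datetime import datetime

ts_ms = 1541875585881  # a Jenkins build timestamp, in milliseconds

# Dividing by 1000 gives seconds, which utcfromtimestamp expects:
build_date = datetime.utcfromtimestamp(ts_ms / 1000)
print(build_date)  # 2018-11-10 18:46:25 UTC (plus fractional seconds)

# Passing raw milliseconds puts the year far out of datetime's range and errors out:
failed = False
try:
    datetime.utcfromtimestamp(ts_ms)
except (ValueError, OverflowError, OSError):
    failed = True
```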
Example output:
$ python test.py
Job name: Dummy #21
Build number: 21
Build url: http://localhost:8080/job/Dummy/21/
Build timestamp: 1541875585881
Build date: 2018-11-10 18:46:25
Job name: Dummy #20
Build number: 20
Build url: http://localhost:8080/job/Dummy/20/
Build timestamp: 1541875564250
Build date: 2018-11-10 18:46:04
On the other hand, if you want to provide the start and stop dates in a different format, remember that you'll need to adjust the format parameter in the strptime() function accordingly. See the Python datetime directives.
A few examples:
datetime.strptime("23.11.2018", "%d.%m.%Y")
datetime.strptime("2018.11.23", "%Y.%m.%d")
datetime.strptime("Jun 1 2005 1:33PM", "%b %d %Y %I:%M%p")
3. Python script to fetch a build on an exact date
If you are interested in finding build by its exact date just replace this line:
if build_date >= start_date and build_date <= stop_date:
with this:
if build_date == date:
where date is the particular build date.
Please note that if you want to find a build by its exact date, you'll need to provide the date in the proper format. In this case it's "%d.%m.%Y %H:%M:%S". Example:
datetime.strptime('10.11.2018 18:46:04', "%d.%m.%Y %H:%M:%S")
Otherwise, even though the dates are the same up to a certain point (minutes, hours, date, etc.), Python won't treat them as equal.
If you want to provide another format, you'll need to adjust it for the build_date variable:
build_date.strftime('%d %m,%Y')

Grep from text file over the past hour

I have several commands similar to:
ping -i 60 8.8.8.8 | while read pong; do echo "$(date): $pong" >> /security/latencytracking/pingcapturetest2.txt; done
output:
Tue Feb 4 15:13:39 EST 2014: 64 bytes from 8.8.8.8: icmp_seq=0 ttl=50
time=88.844 ms
I then search the results using:
cat /security/latencytracking/pingcapturetest* | egrep 'time=........ ms|time=......... ms'
I am looking for latency anomalies over X ms.
Is there a way to search better than I am doing and search over the past 1,2,3, etc. hours as opposed to from the start of the file? This could get tedious over time.
You could add a Unix timestamp to your log, and then search based on that:
ping -i 60 8.8.8.8 | while read pong; do
echo "$(date +"%s"): $pong" >> log.txt
done
Your log will have entries like:
1391548048: 64 bytes from 8.8.8.8: icmp_req=1 ttl=47 time=20.0 ms
Then search with a combination of date and awk:
Using GNU Date (Linux etc):
awk -F: "\$1 > $(date -d '1 hour ago' +'%s')" log.txt
or BSD Date (Mac OSX, BSD)
awk -F: "\$1 > $(date -j -v '-1H' +%s)" log.txt
The command uses date -d to translate an English time phrase (or date -v for the same task on BSD/OSX) into a Unix timestamp. awk then compares the logged timestamp (the first field, before the :) with the generated timestamp and prints all log lines that have a higher value, i.e. are newer.
If you are familiar with R:
1. I'd slurp the whole thing in with read.table() and drop the unnecessary columns,
2. then do whatever calculations you like.
If you have tens of millions of records, though, R might be a bit slow.
Plan B:
1. use cut to drop anything you don't need, then go to the plan above.
You can also do it with bash by comparing dates, as follows:
Crop the date field, and convert that date into the number of seconds since midnight of 1 Jan 1970:
date -d "Tue Feb 4 15:13:39 EST 2014" '+%s'
Then compare that number against the number of seconds from one hour ago:
reference=$(date --date='-1 hour' '+%s')
This way you get all records from the last hour. Then you can filter on the length of the delay.
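The same last-hour-plus-latency filter can be sketched in Python, assuming the timestamp-prefixed log format shown above (the threshold value is hypothetical):

```python
import re
import time

THRESHOLD_MS = 100.0  # hypothetical latency threshold in milliseconds

def slow_pings(lines, now=None):
    """Yield log lines from the past hour whose latency exceeds the threshold."""
    cutoff = (now if now is not None else time.time()) - 3600
    for line in lines:
        # lines look like: "1391548048: 64 bytes from 8.8.8.8: ... time=20.0 ms"
        m = re.match(r"(\d+): .*time=([\d.]+) ms", line)
        if m and int(m.group(1)) > cutoff and float(m.group(2)) > THRESHOLD_MS:
            yield line
```

Feed it the contents of log.txt (e.g. `slow_pings(open("log.txt"))`) to get only the recent, slow entries.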

jtl file is not getting parsed in jenkins for jmeter

I am trying to run a JMeter test from Jenkins. I've already installed the Performance plugin and restarted Jenkins. I don't want to use any Maven/Ant.
Execute shell command
cd /Users/Shared/Jenkins/Home/jobs/meineTui-QA-Test-Jmeter/workspace
java -jar /Users/Shared/Jenkins/apache-jmeter/bin/ApacheJMeter.jar -n -t Login_Logout.jmx -l result.jtl
In the post-build actions of Jenkins -> Publish performance test result report -> JMeter -> Report files -> **/*.jtl
When I run it from Jenkins, the console says:
Performance: Failed to parse /Users/Shared/Jenkins/Home/jobs/meineTui-QA-Test-Jmeter/builds/2013-10-03_17-14-53/performance-reports/JMeter/result.jtl: Content is not allowed in prolog.
So I am not able to view the result/report in the Performance Report section. Any suggestions on how to fix this?
==================================console output=============
+ cd /Users/Shared/Jenkins/Home/jobs/meineTui-QA-Test-Jmeter/workspace
+ java -jar /Users/Shared/Jenkins/apache-jmeter/bin/ApacheJMeter.jar -n -t Login_Logout.jmx -l result.jtl
Creating summariser <summary>
Created the tree successfully using Login_Logout.jmx
Starting the test # Thu Oct 03 17:14:55 BST 2013 (1380816895721)
Waiting for possible shutdown message on port 4445
summary + 2 in 4.1s = 0.5/s Avg: 2013 Min: 766 Max: 3260 Err: 0 (0.00%) Active: 1 Started: 1 Finished: 0
summary + 10 in 4s = 2.5/s Avg: 392 Min: 286 Max: 573 Err: 0 (0.00%) Active: 0 Started: 1 Finished: 1
summary = 12 in 8s = 1.5/s Avg: 662 Min: 286 Max: 3260 Err: 0 (0.00%)
Tidying up ... # Thu Oct 03 17:15:04 BST 2013 (1380816904307)
... end of run
Performance: Percentage of errors greater or equal than 0% sets the build as unstable
Performance: Percentage of errors greater or equal than 0% sets the build as failure
Performance: Recording JMeter reports '**/*.jtl'
Performance: Parsing JMeter report file result.jtl
Performance: Failed to parse /Users/Shared/Jenkins/Home/jobs/meineTui-QA-Test-Jmeter/builds /2013-10-03_17-14-53/performance-reports/JMeter/result.jtl: Content is not allowed in prolog.
Finished: SUCCESS
result.jtl
1380816896268,766,Login,200,OK,Group1 1-1,text,true,230,766
1380816897071,3260,Reservations,200,OK,Group1 1-1,text,true,3295,3260
1380816900339,335,ReservationID,200,OK,Group1 1-1,text,true,8683,335
1380816900681,353,Weather,200,OK,Group1 1-1,text,true,2022,353
1380816901039,563,Summary,200,OK,Group1 1-1,text,true,6528,563
1380816901607,573,Home,200,OK,Group1 1-1,text,true,11955,573
1380816902187,329,HolidayCountdown,200,OK,Group1 1-1,text,true,344,329
1380816902520,375,Contacts,200,OK,Group1 1-1,text,true,2835,375
1380816902899,286,Excursions,200,OK,Group1 1-1,text,true,237,286
1380816903189,361,TravelAgent,200,OK,Group1 1-1,text,true,570,361
1380816903554,319,Profile,200,OK,Group1 1-1,text,true,395,319
Make the following changes in the jmeter.properties file:
Uncomment the following line and change csv to xml:
#jmeter.save.saveservice.output_format=csv
like this:
jmeter.save.saveservice.output_format=xml
Then remove the (#) comment from the following lines:
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
And change the extension of the generated result file from .jtl to .xml.
With latest versions of Jenkins Performance plugin (e.g. v1.14) you can parse both CSV and XML formats.
Depending on the format of your result files, you need to select the appropriate report type in the "Publish performance tests result report" section:
choose the "JMeter" report type if your result files are XML
choose the "JMeterCSV" report type if your result files are CSV.

Jenkins CVS plugin does not detect changes

We've been running Jenkins 1.451 and 1.454 on Windows XP against a CVS repository for a few weeks now, without any problems. The CVS plugin (v1.6) was using the local cvsnt install.
We've since upgraded the CVS plugin to version 2.1 this morning and since then, CVS changes are not detected. The CVS polling log is triggered properly, tons of "cvs rlog" instructions are sent but at the end "No changes" is displayed.
Am I missing some configuration option somewhere?
Thanks.
Update 1: Looking into the entries file, I'm seeing incorrect times for recently updated files, the entry being 4 hours later than the actual change. Could this be related? I'm in the Eastern Time Zone (Montreal) with Daylight Saving Time in effect. The last cvs checkout command looked like this:
cvs checkout -P -r d-chg00014229_op_brc_preimp-op-2012-02-27 -D 23 Mar 2012 11:56:16 EDT -d portailInt portailInt
Update 2: The 4 hour difference corresponds to GMT-adjusted time, so it looks like there's a mixup in time zones somewhere. Using CVS plugin 1.6 the cvs polling command looked like this (executed at 5:26:21 PM EDT):
cvs -q -z3 -n update -PdC -r d-chg00014229_op_brc_preimp-op-2012-02-27 -D "Thursday, March 22, 2012 9:26:21 PM UTC"
Is it possible that the CVS server isn't properly interpreting the -D argument, either the parsing part or the time zone adjustment part?
Update 3: Behaviour is the same with CVS plugin 2.2
Update 4: Manual calls to "cvs rlog" do not return anything, while similar calls to "cvs log" return revision information for all module files.
cvs rlog -d"01 Mar 2012 09:26:21 -0400<27 Mar 2012 12:00:00 -0400" -S -rd-chg00014229_op_brc_preimp-op-2012-02-27 portailInt
cvs rlog: Logging portailInt
cvs log -d"01 Mar 2012 09:00:00 -0400<27 Mar 2012 12:00:00 -0400"
RCS file: /usr/local/cvs/repcvs/PortailInternetMouvement/portailInt/Portail/src/com/xxx/pvm/portail/taglib/I18nBundleTag.java,v
Working file: Portail/src/com/xxx/pvm/portail/taglib/I18nBundleTag.java
head: 1.3
branch:
locks: strict
access list:
symbolic names:
d-chg00014229_op_impl_2012-03-25_v06: 1.1.2.4
d-chg00014229_op_impl_2012-03-25_v05: 1.1.2.4
aq_op_2012-03-25_v04: 1.1.2.4
d-chg00014229_op_impl_2012-03-25_v04: 1.1.2.4
aq_op_2012-03-25_v03: 1.1.2.3
d-chg00014229_op_impl_2012-03-25_v03: 1.1.2.3
d-chg00014229_op_impl_2012-03-25_v02: 1.1.2.3
aq_op_2012-03-25_v01: 1.1
d-chg00014229_op_impl_2012-03-25_v01: 1.1
d-chg00014229_op_brc_preimp-op-2012-02-27: 1.1.0.2
preimp_op_2012-02-27: 1.1
keyword substitution: kv
total revisions: 8; selected revisions: 3
description:
----------------------------
revision 1.1.2.5
date: 2012/03/23 15:42:50; author: ba0chzi; state: Exp; lines: +4 -26
Organize imports
----------------------------
revision 1.1.2.4
date: 2012/03/13 14:18:27; author: ba0chmn; state: Exp; lines: +1 -1
Changement de scope de request ou session pour application dans le but d'améliorer les performances
----------------------------
revision 1.1.2.3
date: 2012/03/06 21:19:03; author: ba0chmn; state: Exp; lines: +14 -8
Utilisation des services de récupération de fichier dans UCM de xxx
This seems to be a bug, documented here: https://issues.jenkins-ci.org/browse/JENKINS-13227
