How to make time_ago_in_words show right value? - ruby-on-rails

When I explicitly state in my production.rb that I want UTC with:
# Timezone Set
config.time_zone = 'UTC'
Then I log into my production machine and run the production console (the machine defaults to +4 Moscow time). I get the following output:
$ Time.now
=> 2012-02-04 20:52:32 +0400
$ Time.zone.now
=> Sat, 04 Feb 2012 16:52:43 UTC +00:00
The value of time_ago_in_words is always off by +4: if I post something 'now', it shows as '4 hours', counts down to 0 over the next 4 hours, and only then starts counting up normally (i.e. '1 hour ago'). How do I get this to display correctly?
=================
After too many hours of trying to figure this out, I got it working just after posting this :-| I had to run:
sudo dpkg-reconfigure tzdata
and set my timezone to UTC there. Now Time.now outputs UTC instead of Moscow time. I'll leave this here for anyone else who runs into this.

As suggested in the comments, here is the solution that worked for me:
sudo dpkg-reconfigure tzdata
and set the timezone to UTC there. Now Time.now outputs UTC instead of Moscow time.
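If you want to verify the machine itself first, here is a minimal sketch (assuming a Debian/Ubuntu box like the one above, since it uses dpkg):
cat /etc/timezone   # should print Etc/UTC (or UTC) after reconfiguring tzdata
date                # local time should now match 'date -u'
Rails console sessions started after the change will then report Time.now in UTC as well.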

Related

How to get a list of Jenkins builds that ran at a particular time?

My current Jenkins has a large number of jobs: different folders, each with multiple jobs. I recently saw that one Jenkins slave (which is auto-scaled) was sending too many requests to another server at a particular time. However, I am unable to find which builds were running at that particular time without checking them manually. Is there any way to get this information using the API or a Groovy script?
I wrote a very small bash script to run on the Jenkins server to parse through the log files.
It is not sufficient to look only at the job start time. A job could have started just before your time window and even ended after your time window; it would still have run within your time window.
#!/bin/bash
# List builds that overlap the window [start, end], given as Unix timestamps in seconds.
start=$1
end=$2
for build_xml in jobs/*/branches/*/builds/*/build.xml
do
    # Build start time from build.xml, in milliseconds
    startTime=$(sed -n -e "s/.*<startTime>\(.*\)<\/startTime>.*/\1/p" "$build_xml")
    # The build.xml modification time approximates the build end time, in seconds
    modificationTime=$(date '+%s' -r "$build_xml")
    # A build overlaps the window if it ended after $start and started before $end
    if [[ $modificationTime -ge $start ]] && [[ $((startTime / 1000)) -le $end ]]
    then
        echo "START $(date -d @$((startTime / 1000))) END $(date -d @$modificationTime) : $build_xml"
    fi
done
usage:
./getBuildsRunningBetween.sh 1565535639 1565582439
would give:
START Sun Aug 11 20:29:00 CEST 2019 END Sun Aug 11 20:30:20 CEST 2019 : jobs/job-name/branches/branch-name/builds/277/build.xml
Running jobs can be found based on the color categorization: jobs that are currently building have a color ending in _anime. Use the Jenkins API below:
"Jenkins host url"/api/xml?tree=jobs[name,url,color]&xpath=/hudson/job[ends-with(color/text(),%22_anime%22)]&wrapper=jobs
The blue ones are the ones that are running; they show up as:
blue_anime
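A quick way to try that call from a shell, as a sketch; JENKINS_HOST, USERNAME and API_TOKEN are placeholders for your own values:
# Quote the URL so the shell doesn't interpret the & characters.
curl -s -u USERNAME:API_TOKEN \
  "http://JENKINS_HOST/api/xml?tree=jobs[name,url,color]&xpath=/hudson/job[ends-with(color/text(),%22_anime%22)]&wrapper=jobs"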
This can be easily achieved with a simple Python script and the Jenkins JSON REST API.
1. Prerequisites
Python 2.7 or 3.x and the Python requests library installed:
pip install requests
For Python 3.x:
pip3 install requests
Also: How to install pip
2. Python script to fetch builds between dates
import requests
from datetime import datetime

jenkins_url = "JENKINS_HOST"
username = "USERNAME"
password = "PASSWORD"
job_name = "JOB_NAME"

stop_date = datetime.strptime('23.11.2018 0:30:00', "%d.%m.%Y %H:%M:%S")
start_date = datetime.strptime('10.11.2018 18:46:04', "%d.%m.%Y %H:%M:%S")

request_url = "{0:s}/job/{1:s}/api/json{2:s}".format(
    jenkins_url,
    job_name,
    "?tree=builds[fullDisplayName,id,number,timestamp,url]"
)

response = requests.get(request_url, auth=(username, password)).json()

builds = []
for build in response['builds']:
    # Jenkins timestamps are in milliseconds; convert to seconds first
    build_date = datetime.utcfromtimestamp(build['timestamp'] / 1000)
    if build_date >= start_date and build_date <= stop_date:
        builds.append(build)
        print("Job name: {0:s}".format(build["fullDisplayName"]))
        print("Build number: {0:d}".format(build["number"]))
        print("Build url: {0:s}".format(build["url"]))
        print("Build timestamp: {0:d}".format(build["timestamp"]))
        print("Build date: {}\n".format(build_date))
The above script works with both Python 2.7 and 3.x. A little explanation:
First, download all build data using the JSON API and parse the response as JSON. Then, for each build, convert its timestamp to a datetime and compare it with the start and stop dates. Note that it's important to divide the timestamp by 1000 to get seconds rather than milliseconds (otherwise the conversion from timestamp will raise a ValueError).
Example output:
$ python test.py
Job name: Dummy #21
Build number: 21
Build url: http://localhost:8080/job/Dummy/21/
Build timestamp: 1541875585881
Build date: 2018-11-10 18:46:25
Job name: Dummy #20
Build number: 20
Build url: http://localhost:8080/job/Dummy/20/
Build timestamp: 1541875564250
Build date: 2018-11-10 18:46:04
On the other hand, if you want to provide the start and stop dates in a different format, remember to adjust the format parameter in the strptime() function accordingly (see the Python datetime directives).
A few examples:
datetime.strptime("23.11.2018", "%d.%m.%Y")
datetime.strptime("2018.11.23", "%Y.%m.%d")
datetime.strptime("Jun 1 2005 1:33PM", "%b %d %Y %I:%M%p")
3. Python script to fetch a build on an exact date
If you are interested in finding a build by its exact date, just replace this line:
if build_date >= start_date and build_date <= stop_date:
with this:
if build_date == date:
where date is the particular build date.
Please note that if you want to find a build by its exact date, you'll need to provide the date in the proper format; in this case it's "%d.%m.%Y %H:%M:%S". Example:
datetime.strptime('10.11.2018 18:46:04', "%d.%m.%Y %H:%M:%S")
Otherwise, even if the dates match down to a certain precision (day, hour, minute, etc.), Python won't treat them as equal.
If you want to use another format, you'll need to adjust the build_date variable accordingly:
build_date.strftime('%d %m,%Y')

How to save image from url to local ubuntu folder from rails console?

I need to write an image parser for some website that will grab images and some other info and save them to my local folder.
So let's say we have image at this url :
https://i.stack.imgur.com/MiqEv.jpg
(this is someone's SO avatar)
So I want to save it to a local folder, let's say to "~/test/image.png".
I found this link
And I tried this in my terminal:
rails console
require 'open-uri'
open('~/test/image.jpg', 'wb') do |file|
  file << open('https://i.stack.imgur.com/MiqEv.jpg').read
end
As you can see, my home/test folder is empty, and I got this output from the console:
#<File:~/test/image.jpg (closed)>
What do I do?
Also I tried this:
require 'open-uri'
download = open('https://i.stack.imgur.com/MiqEv.jpg')
IO.copy_stream(download, '~/test/image.jpg')
And got this output:
=> #https://i.stack.imgur.com/MiqEv.jpg>, #meta={"date"=>"Fri, 06 May 2016
11:58:05 GMT", "content-type"=>"image/jpeg", "content-length"=>"4276",
"connection"=>"keep-alive",
"set-cookie"=>"__cfduid=d7f982c0742bf40e58d626659c65a88841462535885;
expires=Sat, 06-May-17 11:58:05 GMT; path=/; domain=.imgur.com;
HttpOnly", "cache-control"=>"public, max-age=315360000",
"etag"=>"\"b75caf18a116034fc3541978de7bac5b\"", "expires"=>"Mon, 04
May 2026 11:58:05 GMT", "last-modified"=>"Thu, 28 Mar 2013 15:05:35
GMT", "x-amz-version-id"=>"TP7cpPcf0jWeW2t1gUz66VXYlevddAYh",
"cf-cache-status"=>"HIT", "vary"=>"Accept-Encoding",
"server"=>"cloudflare-nginx", "cf-ray"=>"29ec4221fdbf267e-FRA"},
#metas={"date"=>["Fri, 06 May 2016 11:58:05 GMT"],
"content-type"=>["image/jpeg"], "content-length"=>["4276"],
"connection"=>["keep-alive"],
"set-cookie"=>["__cfduid=d7f982c0742bf40e58d626659c65a88841462535885;
expires=Sat, 06-May-17 11:58:05 GMT; path=/; domain=.imgur.com;
HttpOnly"], "cache-control"=>["public, max-age=315360000"],
"etag"=>["\"b75caf18a116034fc3541978de7bac5b\""], "expires"=>["Mon, 04
May 2026 11:58:05 GMT"], "last-modified"=>["Thu, 28 Mar 2013 15:05:35
GMT"], "x-amz-version-id"=>["TP7cpPcf0jWeW2t1gUz66VXYlevddAYh"],
"cf-cache-status"=>["HIT"], "vary"=>["Accept-Encoding"],
"server"=>["cloudflare-nginx"], "cf-ray"=>["29ec4221fdbf267e-FRA"]},
#status=["200", "OK"]>
2.3.0 :244 > IO.copy_stream(download, '~/test/image.jpg') => 4276
But my folder is still empty.
What do I do??
The problem is that the file is not getting created. If you create the file using File.open or open and then execute IO.copy_stream, it will work.
Also, ~/ doesn't get expanded in Ruby; you have to specify the whole path.
require 'open-uri'
download = open('https://i.stack.imgur.com/MiqEv.jpg')
# open the destination with a full path ('wb' for binary data); Ruby does not expand ~
open('/home/user/test/image.jpg', 'wb') do |file|
  IO.copy_stream(download, file)
end
If you want a directory to be created as well, you will have to use Dir.mkdir. If you want to create nested directories, use FileUtils::mkdir_p. If it is difficult to use either, I would suggest using system 'mkdir dirname' or system 'mkdir -p dir1/dir2/dir3'.
Dir.mkdir '/home/user/test' # doesn't work for nested folder creation
require 'fileutils'
FileUtils::mkdir_p '/home/user/test1/test2' # for nested directories
system 'mkdir ~/test' # Unix command for directory creation
system 'mkdir -p ~/test1/test2' # Unix command for nested directories
Hope this helps
If you are using Ubuntu, could you just use wget?
You can use either `wget 'https://i.stack.imgur.com/MiqEv.jpg'` (wrapped in backticks) or system("wget 'https://i.stack.imgur.com/MiqEv.jpg'"). To control where the file ends up, pass wget's -O option with your path.
Note: the backticks are what make Ruby run the string as a system command.
Also, consider using /home/your_name instead of just ~, and note the leading / slash.
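For completeness, a minimal shell sketch of that approach (the target folder is just an example):
mkdir -p ~/test   # create the folder if it doesn't exist yet
wget 'https://i.stack.imgur.com/MiqEv.jpg' -O ~/test/image.jpg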

Disable transparent huge pages in TokuMX

When I try to start the mongod server with ./mongod --dbpath /nlu/ind/mongodata,
I get an error saying:
TokuMX will not run with transparent huge pages enabled.
Tue Aug 20 10:47:34 [initandlisten] Please disable them to continue.
Tue Aug 20 10:47:34 [initandlisten] (echo never > /sys/kernel/mm/transparent_hugepage/enabled)
Tue Aug 20 10:47:34 [initandlisten]
Tue Aug 20 10:47:34 [initandlisten] The assertion failure you are about to see is intentional
Please let me know how to sort this. Thanks in advance for your help.
I have 3 directories in /sys/kernel/mm/: hugepages, ksm, and redhat_transparent_hugepage.
As the message states, you need to do the following:
sudo bash -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
You can cat /sys/kernel/mm/transparent_hugepage/enabled to see the current setting and to make sure that your echo worked.
Set this way, the setting will be cleared when you reboot. TokuMX sets this in its init script.
Please visit this question for a more thorough discussion: https://unix.stackexchange.com/questions/99154/disable-transparent-hugepages
You can add transparent_hugepage=never to the GRUB_CMDLINE_LINUX_DEFAULT option in /etc/default/grub and run sudo update-grub.
To test that it took effect, cat /sys/kernel/mm/transparent_hugepage/enabled; the output should look like this: always madvise [never]
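As a sketch of making it persistent (assuming a Debian/Ubuntu system with GRUB; adjust for your distribution):
# In /etc/default/grub, extend the default kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash transparent_hugepage=never"
sudo update-grub
# After the next reboot, verify:
cat /sys/kernel/mm/transparent_hugepage/enabled   # expect: always madvise [never]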

Jenkins CVS plugin does not detect changes

We've been running Jenkins 1.451 and 1.454 on Windows XP against a CVS repository for a few weeks now, without any problems. The CVS plugin (v1.6) was using the local cvsnt install.
We've since upgraded the CVS plugin to version 2.1 this morning, and since then CVS changes are not detected. The CVS polling log is triggered properly and tons of "cvs rlog" instructions are sent, but at the end "No changes" is displayed.
Am I missing some configuration option somewhere?
Thanks.
Update 1: Looking into the entries file, I'm seeing incorrect times for recently updated files, the entry being 4 hours later than the actual change. Could this be related? I'm in the Eastern Time Zone (Montreal) with Daylight Saving Time in effect. The last cvs checkout command looked like this:
cvs checkout -P -r d-chg00014229_op_brc_preimp-op-2012-02-27 -D 23 Mar 2012 11:56:16 EDT -d portailInt portailInt
Update 2: The 4 hour difference corresponds to GMT-adjusted time, so it looks like there's a mixup in time zones somewhere. Using CVS plugin 1.6 the cvs polling command looked like this (executed at 5:26:21 PM EDT):
cvs -q -z3 -n update -PdC -r d-chg00014229_op_brc_preimp-op-2012-02-27 -D "Thursday, March 22, 2012 9:26:21 PM UTC"
Is it possible that the CVS server isn't properly interpreting the -D argument, either the parsing part or the time zone adjustment part?
Update 3: Behaviour is the same with CVS plugin 2.2
Update 4: Manual calls to "cvs rlog" do not return anything, while similar calls to "cvs log" return revision information for all module files.
cvs rlog -d"01 Mar 2012 09:26:21 -0400<27 Mar 2012 12:00:00 -0400" -S -rd-chg00014229_op_brc_preimp-op-2012-02-27 portailInt
cvs rlog: Logging portailInt
cvs log -d"01 Mar 2012 09:00:00 -0400<27 Mar 2012 12:00:00 -0400"
RCS file: /usr/local/cvs/repcvs/PortailInternetMouvement/portailInt/Portail/src/com/xxx/pvm/portail/taglib/I18nBundleTag.java,v
Working file: Portail/src/com/xxx/pvm/portail/taglib/I18nBundleTag.java
head: 1.3
branch:
locks: strict
access list:
symbolic names:
d-chg00014229_op_impl_2012-03-25_v06: 1.1.2.4
d-chg00014229_op_impl_2012-03-25_v05: 1.1.2.4
aq_op_2012-03-25_v04: 1.1.2.4
d-chg00014229_op_impl_2012-03-25_v04: 1.1.2.4
aq_op_2012-03-25_v03: 1.1.2.3
d-chg00014229_op_impl_2012-03-25_v03: 1.1.2.3
d-chg00014229_op_impl_2012-03-25_v02: 1.1.2.3
aq_op_2012-03-25_v01: 1.1
d-chg00014229_op_impl_2012-03-25_v01: 1.1
d-chg00014229_op_brc_preimp-op-2012-02-27: 1.1.0.2
preimp_op_2012-02-27: 1.1
keyword substitution: kv
total revisions: 8; selected revisions: 3
description:
----------------------------
revision 1.1.2.5
date: 2012/03/23 15:42:50; author: ba0chzi; state: Exp; lines: +4 -26
Organize imports
----------------------------
revision 1.1.2.4
date: 2012/03/13 14:18:27; author: ba0chmn; state: Exp; lines: +1 -1
Changement de scope de request ou session pour application dans le but d'améliorer les performances
----------------------------
revision 1.1.2.3
date: 2012/03/06 21:19:03; author: ba0chmn; state: Exp; lines: +14 -8
Utilisation des services de récupération de fichier dans UCM de xxx
Seems to be a bug. Documented here: https://issues.jenkins-ci.org/browse/JENKINS-13227

Using wget to do monitoring probes

Before I bang my head against all the issues myself I thought I'd run it by you guys and see if you could point me somewhere or pass along some tips.
I'm writing a really basic monitoring script to make sure some of my web applications are alive and answering. I'll fire it off out of cron and send alert emails if there's a problem.
So what I'm looking for are suggestions on what to watch out for. Grepping the output of wget will probably get me by, but I was wondering if there was a more programmatic way to get robust status information out of wget and my resulting web page.
This is a general kind of question, I'm just looking for tips from anybody who happens to have done this kind of thing before.
Check the exit code:
wget --timeout=10 http://example.com/mypage   # add whatever other options you need
if [ $? -ne 0 ] ; then
    echo "there's a problem"   # mail logs, send sms, etc.
fi
I prefer curl --head for this type of usage:
% curl --head http://stackoverflow.com/
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Length: 359440
Content-Type: text/html; charset=utf-8
Expires: Tue, 05 Oct 2010 19:06:52 GMT
Last-Modified: Tue, 05 Oct 2010 19:05:52 GMT
Vary: *
Date: Tue, 05 Oct 2010 19:05:51 GMT
This will allow you to check the return status to make sure it's 200 (or whatever you're expecting it to be) and the Content-Length to make sure it's the expected value (or at least not zero). And it will exit non-zero if there's any problem with the connection.
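Tying that together, a minimal probe sketch; the URL and alert address are placeholders, and it assumes a working mail command on the box:
#!/bin/bash
url="http://example.com/mypage"
# --write-out prints the HTTP status code; --max-time bounds the whole request.
status=$(curl --silent --head --output /dev/null --write-out '%{http_code}' --max-time 10 "$url")
if [ $? -ne 0 ] || [ "$status" != "200" ]; then
    echo "check failed for $url (status: $status)" | mail -s "monitor alert" you@example.com
fi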
If you want to check for changes in the page content, pipe the output through md5 and then compare what you get to your pre-computed known value:
wget -O - http://stackoverflow.com | md5sum
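And a sketch of the comparison step; the stored checksum path is hypothetical:
current=$(wget -q -O - http://stackoverflow.com | md5sum | cut -d' ' -f1)
expected=$(cat /var/lib/monitor/page.md5)   # hypothetical pre-computed value
if [ "$current" != "$expected" ]; then
    echo "page content changed"   # alert however you like
fi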
