Write current svn version into text file - ruby-on-rails

I have a Rails site. I'd like, on mongrel restart, to write the current svn revision into public/version.txt, so that I can then put it into a comment in the page header.
The problem is getting the current local version from svn - I'm a little confused.
If, for example, I run svn update on a file which hasn't been updated in a while, I get "At revision 4571.". However, if I run svn info, I get:
Path: .
URL: http://my.url/trunk
Repository Root: http://my.url/lesson_planner
Repository UUID: #########
Revision: 4570
Node Kind: directory
Schedule: normal
Last Changed Author: max
Last Changed Rev: 4570
Last Changed Date: 2009-11-30 17:14:52 +0000 (Mon, 30 Nov 2009)
Note that this says revision 4570, one lower than what the previous command reported.
Can anyone set me straight and show me how to simply get the current version number?
thanks, max

Subversion comes with a command for doing exactly this: svnversion.
usage: svnversion [OPTIONS] [WC_PATH [TRAIL_URL]]
Produce a compact 'version number' for the working copy path
WC_PATH. TRAIL_URL is the trailing portion of the URL used to
determine if WC_PATH itself is switched (detection of switches
within WC_PATH does not rely on TRAIL_URL). The version number
is written to standard output. For example:
$ svnversion . /repos/svn/trunk
4168
The version number will be a single number if the working
copy is single revision, unmodified, not switched and with
an URL that matches the TRAIL_URL argument. If the working
copy is unusual the version number will be more complex:
4123:4168 mixed revision working copy
4168M modified working copy
4123S switched working copy
4123:4168MS mixed revision, modified, switched working copy
If invoked on a directory that is not a working copy, an
exported directory say, the program will output 'exported'.
If invoked without arguments WC_PATH will be the current directory.
Valid options:
-n [--no-newline] : do not output the trailing newline
-c [--committed] : last changed rather than current revisions
-h [--help] : display this help
--version : show version information
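For the original question, a minimal restart hook could capture that output into public/version.txt. A sketch, assuming it runs from the Rails application root as part of the mongrel (re)start:
#!/bin/sh
# Sketch: record the working copy version before starting mongrel.
# svnversion's -n flag (documented above) suppresses the trailing newline.
svnversion -n . > public/version.txt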

I use the following shell script snippet to create a header file svnversion.h which defines a few constant character strings I use in compiled code. You should be able to do something very similar:
#!/bin/sh -e

# Regenerate svnversion.h from the working copy's svn metadata.
svnversion() {
    svnrevision=`LC_ALL=C svn info | awk '/^Revision:/ {print $2}'`
    svndate=`LC_ALL=C svn info | awk '/^Last Changed Date:/ {print $4,$5}'`
    now=`date`
    # Note: the heredoc terminator EOF must stay at the start of a line.
    cat <<EOF > svnversion.h
// Do not edit! This file was autogenerated
// by $0
// on $now
//
// svnrevision and svndate are as reported by svn at that point in time;
// compiledate and compiletime are filled in by gcc at compilation.
#include <stdlib.h>
static const char* svnrevision = "$svnrevision";
static const char* svndate = "$svndate";
static const char* compiletime = __TIME__;
static const char* compiledate = __DATE__;
EOF
}
test -f svnversion.h || svnversion
This assumes that you would remove the created header file to trigger the build of a fresh one.

If you just want to print the latest revision of the repository, you can use something like this:
svn info <repository_url> -rHEAD | grep '^Revision: ' | awk '{print $2}'
You can use Capistrano for deployment; it creates a REVISION file, which you can copy to public/version.txt.
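For example, a post-deploy step along these lines would publish it (a sketch; the working directory and paths are assumptions):
# Run from the deployed release directory, where Capistrano writes REVISION:
cp REVISION public/version.txt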

It seems that you are running svn info on the directory, but svn update on a specific file. If you update the directory to revision 4571, svn info should print:
Path: .
URL: http://my.url/trunk
Repository Root: http://my.url/lesson_planner
Repository UUID: #########
Revision: 4571
[...]
Last Changed Rev: 4571
Note that the "last changed revision" does not necessarily align with the latest revision of the repository.

Thanks to everyone who suggested Capistrano and svn info.
We do actually use Capistrano, and it does indeed make this REVISION file, which I guess I saw before but didn't pay attention to. As it happens, though, this isn't quite what I need, because it only gets updated on deploy, whereas sometimes we might sneakily update a couple of files and then restart, rather than doing a full deploy.
I ended up building my own file using svn info, grep and awk as many people have suggested here, and putting it in public. This is created on mongrel start, which is part of both the deploy process and the restart process, so it gets done both times.
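For reference, a minimal version of that snippet, built from the commands suggested above (a sketch, assuming it runs from the application root on mongrel start):
# Write the working copy revision into public/version.txt
svn info | grep '^Revision: ' | awk '{print $2}' > public/version.txt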
thanks all!

Related

Is there any way to clone only particular folders/files in Plastic SCM instead of the entire branch?

I want to copy/clone only particular files from the repo.
I tried many ways using Plastic's cm command-line executable.
cm getfile --help
Downloads the content of a given revision.
Usage:
cm getfile | cat <revspec> [--file=<output_file>] [--debug]
[--symlink] [--raw]
revspec Object specification. (Use 'cm help objectspec' to learn
more about specs.)
Options:
--file File to save the output. By default, it is printed on the
standard output.
--debug When a directory specification is used, the command
shows all the items in the directory, its revision id
and file system protection.
--symlink Applies the operation to the symlink and not to the
target.
--raw Displays the raw data of the file.
Examples:
cm cat myfile.txt#br:/main
(Obtains the last revision in branch 'br:/main' of 'myfile.txt'.)
cm getfile myfile.txt#cs:3 --file=tmp.txt
(Obtains the changeset 3 of 'myfile.txt' and write it to file 'tmp.txt'.)
cm cat serverpath:/src/foo.c#br:/main/task003#myrepo
(Obtains the contents of '/src/foo.c' at the last changeset of branch
'/main/task003' in repository 'myrepo'.)
cm cat revid:1230#rep:myrep#repserver:myserver:8084
(Obtains the revision with id 1230.)
cm getfile rev:info\ --debug
(Obtains all revisions in the 'info' directory.)
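Based on the examples above, fetching only the files you need, rather than cloning the whole branch, might look like the following sketch; the repository, branch, and server paths are hypothetical:
# Download individual files from the head of branch /main in repository 'myrepo'
cm getfile serverpath:/src/foo.c#br:/main#myrepo --file=foo.c
cm getfile serverpath:/src/bar.c#br:/main#myrepo --file=bar.c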

Fortify, how to start analysis through command

How can we generate a Fortify report from the command line on Linux?
In the command, how can we include only some folders or files for analysis, and how can we specify the location to store the report?
Please help....
Thanks,
Karthik
1. Step#1 (clean cache)
You need to plan the scan structure before starting:
scanid = 9999 (can be anything you like)
ProjectRoot = /local/proj/9999/
WorkingDirectory = /local/proj/9999/working
(this dir is huge; you need to "rm -rf ./working && mkdir ./working" before every scan, or byte code piles up underneath this dir and consumes your hard disk fast)
log = /local/proj/9999/working/sca.log
source='/local/proj/9999/source/src/**.*'
classpath='local/proj/9999/source/WEB-INF/lib/*.jar; /local/proj/9999/source/jars/**.*; /local/proj/9999/source/classes/**.*'
./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -clean
It is important to specify ProjectRoot; if you do not override this system default, everything is put under ~/.fortify in your home directory.
The sca.log location is very important: if Fortify does not find this file, it cannot find the byte code to scan.
You can alter the ProjectRoot and WorkingDirectory once and for all if you are the only user (in FORTIFY_HOME/Core/config/fortify_sca.properties).
In that case, your command line would be just: ./sourceanalyzer -b 9999 -clean
2. Step#2 (translate source code to byte code)
nohup ./sourceanalyzer -b 9999 -verbose -64 -Xmx8000M -Xss24M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+UseParallelGC -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/sca.log -source 1.5 -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar:/local/proj/9999/source/jars/**/*.jar:/local/proj/9999/source/classes/**/*.class' -extdirs '/local/proj/9999/source/wars/*.war' '/local/proj/9999/source/src/**/*' &
Always run it as a Unix background job (&); if your session to the server times out, it will keep working.
-classpath: put all your known classpaths here for Fortify to resolve the function calls. If a function is not found, Fortify skips the source code translation for it, so that part will not be scanned later. You will get poor scan quality but the FPR looks good (few issues reported). It is important to have all dependency jars in place.
-extdirs: put all directories/files you don't want to be scanned here.
The last section, the files between ' ', is your source.
-64 means use 64-bit Java; if not specified, 32-bit is used and the max heap should be <1.3 GB (-Xmx1200M is safe).
The -XX: options have the same meaning as when launching an application server; use these only to control the class heap and garbage collection, i.e. to tweak performance.
-source is the Java version (1.5 to 1.8).
3. Step#3 (scan with rulepack, custom rules, filters, etc)
nohup ./sourceanalyzer -b 9999 -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999 -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -scan -filter '/local/other/filter.txt' -rules '/local/other/custom/*.xml' -f '/local/proj/9999.fpr' &
-filter: the file name must be filter.txt; any rule guid in this file will not be reported.
-rules: these are the custom rules you wrote. The HP rulepack is in the FORTIFY_HOME/Core/config/rules directory.
-scan: keyword to tell the Fortify engine to scan the existing scan id. You can skip step 2 and only do step 3 if you did not change code and just want to play with different filters/custom rules.
4. Step#4 Generate PDF from the FPR file (if required)
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr'
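Putting the four steps together, a wrapper script might look like this sketch (paths, memory settings, and classpaths are taken from or modeled on the steps above and will need adapting to your project):
#!/bin/sh
# Sketch: full Fortify scan pipeline for build id 9999.
BUILDID=9999
PROJ=/local/proj/$BUILDID
OPTS="-Dcom.fortify.sca.ProjectRoot=$PROJ -Dcom.fortify.WorkingDirectory=$PROJ/working -logfile $PROJ/working/sca.log"

# Step 1: clean the cache (the working dir grows fast; recreate it each scan)
rm -rf $PROJ/working && mkdir $PROJ/working
./sourceanalyzer -b $BUILDID $OPTS -clean

# Step 2: translate source code to byte code
./sourceanalyzer -b $BUILDID -64 -Xmx8000M $OPTS -source 1.5 \
    -classpath "$PROJ/source/WEB-INF/lib/*.jar" "$PROJ/source/src/**/*"

# Step 3: scan with filters and custom rules
./sourceanalyzer -b $BUILDID -64 -Xmx8000M $OPTS -scan \
    -filter /local/other/filter.txt -rules '/local/other/custom/*.xml' \
    -f $PROJ/$BUILDID.fpr

# Step 4: generate the PDF report
./ReportGenerator -format pdf -f $PROJ/$BUILDID.pdf -source $PROJ/$BUILDID.fpr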

How to create xpi file with 7Zip?

I would like to pack my Firefox extension as an xpi file. I tried adding it to an archive and naming it filename.xpi.
But when I try to install it in Firefox I get a "package corrupted" message. Is there any way I can create a valid xpi file?
I have installed Cygwin and tried to execute the zip command to create the xpi file, but got a "zip is not a command" error.
Can somebody guide me to get it done?
If you are on Windows (and since you installed Cygwin, it looks like you are), you can use the Windows built-in tool:
Select the contents of the extension (remember, don't select the outside folder).
Right Click
Send to
Compressed (zipped) folder
Then just replace .zip with .xpi in the filename.
It looks like your problem is with completing step 1 correctly. Select only the contents of the extension, not the folder that contains it.
So basically your zip file should have following structure:
my_extension.zip
|- install.rdf
|- chrome.manifest
|- <chrome>
and NOT this structure:
my_extension.zip
|- <my_extension>
|- install.rdf
|- chrome.manifest
|- <chrome>
I experienced the same problems today and found the cause to be that the add-on was not signed by Mozilla, causing Firefox to refuse the installation. Up until recently, it was possible to bypass this security check by setting xpinstall.signatures.required to false in about:config. However, as of Firefox 46, signing is mandatory and no bypass is provided any longer; see https://blog.mozilla.org/addons/2016/01/22/add-on-signing-update/. This means that one has to either downgrade to a previous version or use a non-release channel version to test one's add-ons :(
Also, here's how I pack an extension for Firefox with command line 7z:
cd /the/extension/folder/
7z a ../<extension_name>.xpi * -r
(where 'a' stands for "add/create" and "-r" for recursive)
Or to update the extension with the file(s) we just edited:
cd /the/extension/folder/
7z u ../<extension_name>.xpi * -r
("u" for update the archive's content)
Two methods: using the GUI (7zFM.exe), or the command line / a batch file.
1.0) GUI method. Assuming 7-Zip is installed with shell integration so you see 7-Zip show up in the context-menu (right-click of selected files) of Windows Explorer.
1.a) Go into the folder of your add-on.
1.b) Select all the files and folders you want to include in the .xpi. Assuming you don't have any files you want to ignore down in any sub-folders. If you do, you might want to use the command line option.
1.c) Right-click on the list of selected files, find the 7z icon, choose the Add to archive... option.
1.d) A dialog pops up. Edit the location and name of the zip file, change .zip to .xpi, etc.
1.e) Note if you create the .xpi in the same folder, don't re-archive it in the future, as your add-on will fail horribly. You never want an .xpi ending up inside your .xpi by accident. I usually just create it in the parent folder, by adding ..\ to the beginning of the file name, e.g. ..\addon-1.2.3-fx.xpi
1.f) 7-Zip has a lot of powerful compression options, not all of which Firefox can handle. Choose settings which Firefox is able to process (see the option table in section 2.c below).
2.0) Command Line method. Assuming you're in Windows, and know how to open a command prompt, change drives and directories (a.k.a. folders).
2.a) CD to your add-on directory.
2.b) Use the most basic 7-Zip command line.
"C:\Program Files\7-Zip\7z.exe" a -tzip addon-1.2.3-fx.xpi *
2.c) You can get a smaller file by finding the exact command line options which correspond to the above GUI, namely:
"C:\Program Files\7-Zip\7z.exe" a -tzip -mx=9 -mm=Deflate -mfb=258 -mmt=8 "addon-1.2.3-fx.xpi" *
Note that there is no Dictionary size = 32kb option when using Deflate Compression method. Otherwise, the options are in order and correspond to the GUI.
|-----------------------|---------|--------------|
| Option / Parameter | GUI | Command line |
|-----------------------|---------|--------------|
| Archive format | zip | -tzip |
| Compression level | Ultra | -mx=9 |
| Compression method | Deflate | -mm=Deflate |
| Dictionary size | 32 KB | (none) |
| Word size | 258 | -mfb=258 |
| Number of CPU threads | 8 | -mmt=8 |
|-----------------------|---------|--------------|
| Additional Parameters | | |
|-----------------------|---------|--------------|
| Recurse into Folders | (none) | -r |
| Multiple passes | (none) | -mpass=15 |
| Preserve Timestamps | (none) | -mtc=on |
| Ignore files in list | | -x#{ignore} |
|-----------------------|---------|--------------|
Notes:
i) The multi-thread option (-mmt=8) is specific to my system which has 8 cores. You will need to lower this to 6 or 4 or 2 or 1 (i.e. remove option) if you have fewer cores, etc, or increase if you have more. Won't make much difference either way for a small extension.
ii) The option to recurse into folder may or may not be the default, so specifying this option should ensure proper recursion.
iii) The option to preserve windows timestamps (creation, access, modification) should default to on anyways, so may not be needed.
iv) The ignore-files-in-list option takes a file containing the filenames and wildcards you wish to exclude.
2.d) Advanced topic #1: ignore file list (examples)
|----------------|------------------------------------|
| What to Ignore | Why to Ignore |
|----------------|------------------------------------|
| TODO.txt | Informal reminders of code to fix. |
| *.xpi | In case you forget warning above! |
| .ignore | Ignore the ignore file list. |
| ignore.txt | Same thing, if you used this name. |
|----------------|------------------------------------|
"C:\Program Files\7-Zip\7z.exe" a -tzip -mx9 -mm=Deflate -mfb=258 -mmt=8 -mpass=15 -mtc=on "addon-1.2.3-fx.xpi" * -x#ignore.txt
2.e) Advanced topic #2: Batch file (Windows CMD.EXE), assuming fairly recent windows, i.e. from the 21st century. This can be as simple and rigid, or complex and flexible as you care to make it. A general balance is to assume you will be in the Command Prompt, in the top level directory of the add-on you are working on, and that you have intelligently named that directory to have the same basename of the .xpi file e.g. D:\dev\addon-1.2.3-fx directory for the addon-1.2.3-fx.xpi add-on xpi. This batch file makes this assumption, and dynamically figures out the correct basename to use for the .xpi.
@ECHO OFF
REM - xpi.bat - batch file to create Mozilla add-on xpi using 7-Zip
REM - This finds the folder name, and discards the rest of the full path, saves in an environment variable.
FOR %%* IN (.) DO SET XPI=%%~nx*
REM - Uncomment the DEL line, or delete .xpi file manually, if it gets corrupted or includes some other junk by accident.
REM DEL "%XPI%.xpi"
REM - Command line which does everything the GUI does, but also lets you run several passes for the smallest .xpi possible.
"C:\Program Files\7-Zip\7z.exe" a -tzip -r -mx=9 -mm=Deflate -mfb=258 -mmt=8 -mpass=15 -mtc=on "%XPI%.xpi" * -x#ignore.txt
REM - Cleanup the environment variable.
SET XPI=
When packing an extension using 7z, compress it into a .zip and then rename it to .xpi; don't compress it directly to .xpi.
Do the following while using 7z:
Select only the inner contents and not the outer folder.
Enter the filename as filename.xpi and choose archive format as zip in the prompt that appears while zipping.
You will find a valid xpi file created.
Use the created xpi for installing your extension on firefox.
It works!
Just zip all the files and folders inside my_extension folder and change the resulting zipped file's extension to my_extension.xpi
/my_extension
|- defaults/
|- locale/
|- resources/
|- install.rdf
|- ... (other files and folders)
Installing an xpi created by zipping the my_extension folder itself will result in the error:
"This add-on could not be installed because it appears to be corrupt."
I tried building the zip myself in several ways because I was convinced I was doing something wrong, since all I got was the "package corrupted" stuff.
Well, not anymore, and I do not even need to load it from Load temporary add-on (now I drag and drop the xpi file from the desktop onto Waterfox and it installs as a legit xpi file!).
How do I do that?
First, I load the xpi file via Load temporary add-on (url: about:debugging#addons), built with the .bat file method using 7zip from user314159 above.
After you load that, you should see somewhere something similar to:
Extension ID
86257e65ca311ee368ffcee50598ce25733a049b#temporary-addon
Then all you should do is add the following inside manifest.json, modifying the "applications" key:
"applications": {
"gecko": {
"strict_min_version": "54.0a1",
"id": "86257e65ca311ee368ffcee50598ce25733a049b#temporary-addon"
}
},
After this, push Remove to uninstall the temporary add-on, then build the xpi again like you did before.
Now it is a normal, SIGNED xpi file which you can install normally! (Here it works without modifying anything else.)
I use Waterfox x64; there seem to be problems with Firefox.
The answer is that you should upload your extension to the hub and then use the Mozilla signing API:
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Distribution
Create a file config.js
Insert this code into config.js:
//
try {
    Components.utils.import("resource://gre/modules/addons/XPIProvider.jsm", {})
        .eval("SIGNED_TYPES.clear()");
} catch(ex) {}
Move config.js to the application folder, e.g. C:\Program Files\Mozilla Firefox\
Create config-prefs.js and write this code into it:
pref("general.config.obscure_value", 0);
pref("general.config.filename", "config.js");
Place config-prefs.js in C:\Program Files\Mozilla Firefox\defaults\pref\
Restart Firefox
Look at the result

jenkins plugin for triggering build whenever any file changed in a given directory

I am looking for functionality where we have a directory with some files in it.
Whenever anyone makes a change to any of the files in the directory, Jenkins should trigger a build.
Is there any plugin or method for this functionality? Please advise.
Thanks in advance.
I have not tried it myself, but The FSTrigger plugin seems to do what you want:
FSTrigger provides polling mechanisms to monitor a file system and
trigger a build if a file or a set of files have changed.
If you can monitor the directory with a script, you can trigger the build with an HTTP GET, for example with wget or curl:
wget -O- $JENKINS_URL/job/JOBNAME/build
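If the Jenkins instance requires authentication (the usual case), the same trigger can be sent with curl using an API token; the user name and token here are placeholders:
curl -X POST "$JENKINS_URL/job/JOBNAME/build" --user myuser:myapitoken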
Although only slightly related: this issue seems to be about monitoring static files on the system, yet there are many version control systems for just this purpose.
I answered this in another post; if you're using git to track changes on the files themselves:
#!/bin/bash
set -e

job_name="whatever"
JOB_URL="http://myserver:8080/job/${job_name}/"
FILTER_PATH="path/to/folder/to/monitor"

# Python (2) helper: print every path touched by this build's changeset.
python_func="import json, sys
obj = json.loads(sys.stdin.read())
ch_list = obj['changeSet']['items']
_list = [ j['affectedPaths'] for j in ch_list ]
for outer in _list:
    for inner in outer:
        print inner
"

_affected_files=`curl --silent ${JOB_URL}${BUILD_NUMBER}'/api/json' | python -c "$python_func"`

if [ -z "`echo \"$_affected_files\" | grep \"${FILTER_PATH}\"`" ]; then
    echo "[INFO] no changes detected in ${FILTER_PATH}"
    exit 0
else
    echo "[INFO] changed files detected: "
    for a_file in `echo "$_affected_files" | grep "${FILTER_PATH}"`; do
        echo "    $a_file"
    done
fi
You can add the check directly to the top of the job's exec shell, and it will exit 0 if no changes are detected. Hence, you can always poll the top level of the repo for check-ins to trigger a build, and only complete a build if the files in question change.

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built Nagios from source, and have used yum to install into this root all needed dependencies, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing this question, which had practically the same problem I'm having with check_url, I decided to open up a new question on the subject because:
a) I'm not using NRPE with this check
b) I tried the suggestions made in the earlier question I linked to, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give it one more shot at figuring out a solution. I found the check_url_status plugin and decided to give that one a try. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Run the following:
./check_url_status -U some-domain.com
When I run the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name               {my-shared-web-server}
    service_description     URL: somedomain.com
    check_command           check_url!somedomain.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
    host_name               myers
    service_description     URL: my-url.com
    check_command           check_http_url!http://my-url.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
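Since this is a shared server with several websites, a name-based virtual host variant may also be useful: check_http's -H option sets the Host header. A sketch (the command name here is made up):
define command{
    command_name    check_http_vhost
    command_line    $USER1$/check_http -H $ARG1$ -u $ARG2$
}
A service would then call it as: check_command check_http_vhost!somedomain.com!/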
A better way to monitor URLs is WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl utils package; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can write a script plugin. It is easy; you only have to check the URL with something like:
`curl -Is $URL -k| grep HTTP | cut -d ' ' -f2`
$URL is what you pass to the script command as a parameter.
Then check the result: if you get a code greater than 399 you have a problem; otherwise everything is OK! Then exit with the right code and the message for Nagios.
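A minimal sketch of such a script plugin, assuming curl is available and using Nagios's standard exit codes (0 = OK, 2 = CRITICAL); the script name is hypothetical:
#!/bin/sh
# check_url_code.sh (hypothetical name): report the HTTP status of $1 to Nagios.
URL="$1"
CODE=`curl -Is "$URL" -k | grep HTTP | cut -d ' ' -f2`

if [ -z "$CODE" ]; then
    echo "CRITICAL - no response from $URL"
    exit 2
elif [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2
else
    echo "OK - $URL returned HTTP $CODE"
    exit 0
fi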
