If I want to write a pre-commit hook to check that, e.g., the string "I love pre-commit" isn't anywhere in my source code, I could do
- repo: local
  hooks:
  - id: love_statement
    name: Check that I love pre-commit isn't in source code
    types: [python]
    entry: 'I love pre-commit'
    language: pygrep
However, what if I want to do the opposite - that is, check that "I love pre-commit" is in every source code file? How could I modify my hook so that, instead of failing when "I love pre-commit" is found, it fails when "I love pre-commit" isn't found?
This can now be done with:
args: [--negate]
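For example, a sketch of the full hook using that flag (this assumes a pre-commit release new enough to support --negate for pygrep hooks):
- repo: local
  hooks:
  - id: love_statement
    name: Check that I love pre-commit is in source code
    types: [python]
    entry: 'I love pre-commit'
    args: [--negate]
    language: pygrep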
You can use a few regex tricks to do this:
repos:
- repo: local
  hooks:
  - id: love_statement
    name: Check that I love pre-commit is in source code
    types: [python]
    args: [--multiline]
    entry: '\A((?!I love pre-commit).)*\Z'
    language: pygrep
This combines the following:
use the rough negative lookahead pattern from this answer
use args: [--multiline] to push pygrep into whole-file matching mode
switch from ^ and $ (per-line anchors) to \A and \Z (whole-string anchors)
Here's an example execution:
$ git ls-files -- '*.py' | xargs tail -n999
==> t.py <==
print('I do not love pre-commit')
==> t2.py <==
print('I love pre-commit')
$ pre-commit run --all-files
Check that I love pre-commit is in source code...........................Failed
- hook id: love_statement
- exit code: 1
t.py:1:print('I do not love pre-commit')
disclaimer: I'm the author of pre-commit
I am using Ansible's shell module to find a particular string and store it in a variable, but if grep does not find anything, I get an error.
Example:
- name: Get the http_status
  shell: grep "http_status=" /var/httpd.txt
  register: cmdln
  check_mode: no
When I run this Ansible playbook and the http_status string is not there, the playbook stops; I don't even get stderr.
How can I make Ansible keep running without interruption even if the string is not found?
grep by design returns code 1 if the given string was not found. Ansible by design stops execution if the return code is different from 0. Your system is working properly.
To prevent Ansible from stopping playbook execution on this error, you can:
add the ignore_errors: yes parameter to the task
use the failed_when: parameter with a proper condition
Because grep returns error code 2 for exceptions, the second method seems more appropriate, so:
- name: Get the http_status
  shell: grep "http_status=" /var/httpd.txt
  register: cmdln
  failed_when: "cmdln.rc == 2"
  check_mode: no
You might also consider adding changed_when: false so that the task won't be reported as "changed" every single time.
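Putting both suggestions together, the task might look like this:
- name: Get the http_status
  shell: grep "http_status=" /var/httpd.txt
  register: cmdln
  failed_when: "cmdln.rc == 2"
  changed_when: false
  check_mode: no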
All options are described in the Error Handling In Playbooks document.
As you observed, Ansible will stop execution if the grep exit code is not zero. You can ignore it with ignore_errors.
Another trick is to pipe the grep output to cat: the shell reports a pipeline's exit code as that of its last command, and cat always exits zero, whether grep found a match or not. Try it.
- name: Get the http_status
  shell: grep "http_status=" /var/httpd.txt | cat
  register: cmdln
  check_mode: no
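As a follow-up sketch (the task name is illustrative), the registered variable can then be inspected; cmdln.stdout is simply empty when there was no match:
- name: Show the http_status line (empty if grep found nothing)
  debug:
    msg: "{{ cmdln.stdout }}"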
I am trying to generate Dockerfiles with an Ansible template - see the role source and the template in Ansible Galaxy and GitHub.
I need to generate a standard Dockerfile line like:
...
VOLUME ["/etc/postgresql/9.4"]
...
However, when I put this in the input file:
...
instruction: CMD
value: "[\"/etc/postgresql/{{postgresql_version}}\"]"
...
It ends up rendered like:
...
VOLUME ['/etc/postgresql/9.4']
...
and I lose the double quotes (which renders the Dockerfile useless).
Any help? How can I convince Jinja not to substitute " with '? I tried \", the |safe filter, even {% raw %} - it just keeps doing it!
Update:
Here is how to reproduce the issue:
Go get the peruncs.docker role from galaxy.ansible.com or GitHub (links are given above).
Write up a simple playbook (say demo.yml) with the content below and run: ansible-playbook -v demo.yml. The -v option will allow you to see the temp directory where the generated Dockerfile goes with the broken content, so you can examine it. Successfully generating the docker image is not important; just try to get the Dockerfile right.
- name: Build docker image
  hosts: localhost
  vars:
    - somevar: whatever
    - image_tag: "blabla/booboo"
    - docker_copy_files: []
    - docker_file_content:
      - instruction: CMD
        value: '["/usr/bin/runit", "{{somevar}}"]'
  roles:
    - peruncs.docker
Thanks in advance!
Something in Ansible appears to be recognizing that as valid Python, so it's getting transformed into a Python list and then serialized using Python's str(), which is why you end up with the single-quoted values.
An easy way to work around this is to stick a space at the beginning of the value, which seems to prevent it from getting converted into Python:
- name: Build docker image
  hosts: localhost
  vars:
    - somevar: whatever
    - image_tag: "blabla/booboo"
    - docker_copy_files: []
    - docker_file_content:
      - instruction: CMD
        value: ' ["/usr/bin/runit", "{{somevar}}"]'
  roles:
    - peruncs.docker
This results in:
CMD ["/usr/bin/runit", "whatever"]
I have more than 1000 jobs in Jenkins, and I would like to go through all of them in order to clean out unused jobs.
What is the recommended way to do so?
I guess that every job's "xml" file holds an indication of when it last ran.
Can anyone point me to where this file is located?
I ended up filtering the jobs with the "View Job Filters" plugin.
You can use the "Filter by Build Trend" option as follows:
Create a view for "All jobs" -> go to edit view -> in "Add Job Filter" choose "Build Trend Filter" -> choose the filter you desire.
I don't think you can do this in one step, but you can do it in two steps.
Find the URLs of all jobs with this:
https://jenkins-server/api/json?tree=jobs[url]
Get more info about each job by using the URLs returned from step 1:
url-from-step1/api/json
This will give you the health report, last failed/successful build, etc. If you need more info about these builds you can make a new request with:
url-from-step1/last-build-number/api/json
I recommend using JSON, and using jq (http://stedolan.github.io/jq/, https://jqplay.org/) to parse it.
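As a rough sketch of those two steps with curl and jq (the server URL is a placeholder and authentication is omitted; adjust for your setup):
#!/bin/bash
# Hypothetical Jenkins URL; replace with your server.
jenkins='https://jenkins-server'
# Step 1: list the URL of every job.
curl -s "$jenkins/api/json?tree=jobs[url]" | jq -r '.jobs[].url' |
while read -r joburl; do
    # Step 2: ask each job for its last build timestamp (ms since epoch; "null" if never built).
    ts=$(curl -s "${joburl}api/json?tree=lastBuild[timestamp]" | jq -r '.lastBuild.timestamp')
    echo "${joburl} ${ts}"
done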
Happy coding!
You can leverage the REST API. The following urls might be relevant for you:
https://ci.jenkins-ci.org/api/xml?tree=jobs[name] -- to get a list of jobs
https://ci.jenkins-ci.org/job/{jobName}/lastBuild/buildTimestamp?format=yyyy-MM-dd-HH-mm-ss -- to get the time of last build of job {jobName}
Feel free to change xml to json/python...
I can provide the following shell script as a rough example:
#!/bin/bash
jenkinsUrlBase='https://ci.jenkins-ci.org'
callJenkins() {
    curl --silent --show-error -g "$jenkinsUrlBase${1}"
}
callJenkins '/api/xml?tree=jobs[name]' | xmlstarlet sel -t -v '//hudson/job/name' | while read projectName ; do
    timestamp=$(callJenkins "/job/${projectName}/lastBuild/buildTimestamp?format=yyyy-MM-dd-HH-mm-ss")
    echo "Last build of ${projectName}: ${timestamp}"
done
You can exploit the directory and file structure in ${JENKINS_HOME}:
cd ${JENKINS_HOME}/jobs/${JOB_NAME}/builds
ls -lt | head -2 | tail -1 | awk '{print $9}'
Example output:
2015-08-13_11-48-25
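Building on the same layout, a sketch that sweeps every job at once (build directory contents vary across Jenkins versions, so treat this as illustrative, not definitive):
#!/bin/bash
cd "${JENKINS_HOME}/jobs"
for job in */; do
    # The newest entry under builds/ corresponds to the most recent build.
    last=$(ls -t "${job}builds" 2>/dev/null | head -1)
    echo "${job%/}: ${last:-never built}"
done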
When input is read from terminal, GNU Parallel always displays a warning:
parallel: Warning: Input is read from the terminal. Only experts do this on purpose. Press CTRL-D to exit.
But sometimes I do want to read from terminal (e.g., when I'm copy & pasting stuff from elsewhere entry by entry). Is it possible to turn off this warning? I couldn't find such an option in man parallel or man parallel_tutorial.
Note that I don't want a cheap solution like 2>/dev/null, since warning messages from other programs will be turned off, too. For instance, consider the following simple script:
#!/bin/bash
function print12 () {
    echo "printing $1 to stdout"
    echo "printing $1 to stderr" >/dev/stderr
}
export -f print12
SHELL=/bin/bash parallel -k print12 2>/dev/null
Messages printed to stderr will all be suppressed.
Just realized that I can do a cat or some read </dev/tty to achieve my desired effect. But let's just focus on the original question.
It cannot be turned off. But see it as praise: since you are doing it on purpose, you are an expert (at least in the eyes of GNU Parallel).
As it is just a warning, you are free to paste your arguments and have them run: The warning does not stop GNU Parallel from reading your input.
If you really do not like the warning:
cat | parallel ...
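Applied to the script above, that would be the following; parallel's stdin is then a pipe rather than the terminal, so the warning disappears while print12's stderr still gets through:
cat | SHELL=/bin/bash parallel -k print12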
I have a Rails site. I'd like, on mongrel restart, to write the current svn version into public/version.txt, so that I can then put this into a comment in the page header.
The problem is getting the current local version from svn - I'm a little confused.
If, for example, I do svn update on a file which hasn't been updated in a while, I get "At revision 4571.". However, if I do svn info, I get:
Path: .
URL: http://my.url/trunk
Repository Root: http://my.url/lesson_planner
Repository UUID: #########
Revision: 4570
Node Kind: directory
Schedule: normal
Last Changed Author: max
Last Changed Rev: 4570
Last Changed Date: 2009-11-30 17:14:52 +0000 (Mon, 30 Nov 2009)
Note this says revision 4570, one lower than the previous command reported.
Can anyone set me straight and show me how to simply get the current version number?
Thanks, max
Subversion comes with a command for doing exactly this: svnversion (svnversion.exe on Windows).
usage: svnversion [OPTIONS] [WC_PATH [TRAIL_URL]]
Produce a compact 'version number' for the working copy path
WC_PATH. TRAIL_URL is the trailing portion of the URL used to
determine if WC_PATH itself is switched (detection of switches
within WC_PATH does not rely on TRAIL_URL). The version number
is written to standard output. For example:
$ svnversion . /repos/svn/trunk
4168
The version number will be a single number if the working
copy is single revision, unmodified, not switched and with
an URL that matches the TRAIL_URL argument. If the working
copy is unusual the version number will be more complex:
4123:4168 mixed revision working copy
4168M modified working copy
4123S switched working copy
4123:4168MS mixed revision, modified, switched working copy
If invoked on a directory that is not a working copy, an
exported directory say, the program will output 'exported'.
If invoked without arguments WC_PATH will be the current directory.
Valid options:
-n [--no-newline] : do not output the trailing newline
-c [--committed] : last changed rather than current revisions
-h [--help] : display this help
--version : show version information
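For the use case in the question, a minimal sketch run from the root of the working copy (-n suppresses the trailing newline):
svnversion -n . > public/version.txt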
I use the following shell script snippet to create a header file svnversion.h which defines a few constant character strings I use in compiled code. You should be able to do something very similar:
#!/bin/sh -e
svnversion() {
    svnrevision=`LC_ALL=C svn info | awk '/^Revision:/ {print $2}'`
    svndate=`LC_ALL=C svn info | awk '/^Last Changed Date:/ {print $4,$5}'`
    now=`date`
    cat <<EOF > svnversion.h
// Do not edit! This file was autogenerated
// by $0
// on $now
//
// svnrevision and svndate are as reported by svn at that point in time;
// compiledate and compiletime are filled in by the compiler at compilation
#include <stdlib.h>
static const char* svnrevision = "$svnrevision";
static const char* svndate = "$svndate";
static const char* compiletime = __TIME__;
static const char* compiledate = __DATE__;
EOF
}
test -f svnversion.h || svnversion
This assumes that you would remove the created header file to trigger the build of a fresh one.
If you just want to print the latest revision of the repository, you can use something like this:
svn info <repository_url> -rHEAD | grep '^Revision: ' | awk '{print $2}'
You can use Capistrano for deployment; it creates a REVISION file, which you can copy to public/version.txt.
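For example, a sketch of that copy step, run from the release root after deploy (paths depend on your setup):
cp REVISION public/version.txt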
It seems that you are running svn info on the directory, but svn update on a specific file. If you update the directory to revision 4571, svn info should print:
Path: .
URL: http://my.url/trunk
Repository Root: http://my.url/lesson_planner
Repository UUID: #########
Revision: 4571
[...]
Last Changed Rev: 4571
Note that the "last changed revision" does not necessarily align with the latest revision of the repository.
Thanks to everyone who suggested Capistrano and svn info.
We do actually use Capistrano, and it does indeed create this REVISION file, which I guess I saw before but didn't pay attention to. As it happens, though, this isn't quite what I need, because it only gets updated on deploy, whereas sometimes we might sneakily update a couple of files and then restart rather than doing a full deploy.
I ended up building my own file using svn info, grep and awk, as many people have suggested here, and putting it in public. This is created on mongrel start, which is part of both the deploy process and the restart process, so it gets done both times.
Thanks all!