.clang-tidy configuration file content is being ignored - clang

I want to modify the checks that the code analyzer program clang-tidy is doing, but it seems like the content of the configuration file .clang-tidy is being ignored.
I create the file by calling clang-tidy with the flag -dump-config and redirect the output to the file .clang-tidy.
Then I call sed to replace the value 800 with the value 700, which corresponds to the option with key google-readability-function-size.StatementThreshold. The specific option is not important to me, this is just for testing.
I verify that the value has indeed been changed.
Lastly, I rerun clang-tidy to see if it has accepted the new configuration, but it remains unchanged.
# generate config
clang-tidy -dump-config > .clang-tidy
# change config
sed -i 's/800/700/' .clang-tidy
# verify change
grep '700' .clang-tidy
# use config, does not work
clang-tidy -config '' -dump-config
The check option remains at the default value; the content of the config file has been ignored:
CheckOptions:
  # some lines omitted for brevity
  - key:   google-readability-function-size.StatementThreshold
    value: '800'
Running clang-tidy -config '' -dump-config -explain-config shows that the configuration file has at least been found, i.e. many clang-analyzer specific checks are enabled in the detected config file, but the check google-readability-function-size.StatementThreshold is not listed.
I also tried passing the config directly as command line parameter with the command clang-tidy -config="{CheckOptions: [ {key: google-readability-function-size.StatementThreshold, value: 700} ]}" -dump-config, but got the same result.
The command clang-tidy --version gives the following output, running on Ubuntu 20.04:
LLVM (http://llvm.org/):
LLVM version 10.0.0
Optimized build.
Default target: x86_64-pc-linux-gnu
Host CPU: haswell

To see the change, you need to enable the check:
Checks: 'google-readability-function-size'
You can see it changed in the effective configuration with:
clang-tidy --dump-config
Another pitfall to be aware of is that errors parsing the values will be silently discarded.
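For example, a minimal .clang-tidy that both enables the check and overrides the option could look like this (a sketch based on the option discussed above; the list-style CheckOptions syntax matches clang-tidy 10):

Checks: 'google-readability-function-size'
CheckOptions:
  - key:   google-readability-function-size.StatementThreshold
    value: '700'

The same applies when passing the configuration on the command line: the -config string needs to contain the Checks key as well, not just CheckOptions.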

Earlier today, I was trying to generate a certificate with a DNSName entry in the SubjectAltName extension:
$ openssl req -new -subj "/C=GB/CN=foo" -addext "subjectAltName = DNS:foo.co.uk" \
-addext "certificatePolicies = 1.2.3.4" -key ./private-key.pem -out ~/req.pem
This command led to the following error message:
name is expected to be in the format /type0=value0/type1=value1/type2=... where characters may be escaped by \. This name is not in that format: 'C:/Program Files/Git/C=GB/CN=foo'
problems making Certificate Request
How can I stop Git Bash from treating this string parameter as a filepath, or at least stop it from making this alteration?
The release notes for the Git Bash 2.21.0 update, released today, mention this as a known issue. Fortunately, they also describe two solutions to the problem:
If you specify command-line options starting with a slash, POSIX-to-Windows path conversion will kick in converting e.g. "/usr/bin/bash.exe" to "C:\Program Files\Git\usr\bin\bash.exe". When that is not desired -- e.g. "--upload-pack=/opt/git/bin/git-upload-pack" or "-L/regex/" -- you need to set the environment variable MSYS_NO_PATHCONV temporarily, like so:
MSYS_NO_PATHCONV=1 git blame -L/pathconv/ msys2_path_conv.cc
Alternatively, you can double the first slash to avoid POSIX-to-Windows path conversion, e.g. "//usr/bin/bash.exe".
Using MSYS_NO_PATHCONV=1 can be problematic if your script accesses files.
Prefixing with a double forward slash doesn't work for the specific case of OpenSSL, as it causes the first DN segment key to be read as "/C" instead of "C", so OpenSSL drops it, outputting:
req: Skipping unknown attribute "/C"
Instead, I used a function that detects if running on bash for Windows, and prefixes with a "dummy" segment if so:
# If running on bash for Windows, any argument starting with a forward slash is automatically
# interpreted as a drive path. To stop that, you can prefix with 2 forward slashes instead
# of 1 - but in the specific case of openssl, that causes the first DN segment key to be read as
# "/O" instead of "O", and is skipped. We work around that by prefixing with a spurious segment,
# which will be skipped by openssl
function fixup_cn_subject() {
    local result="${1}"
    case $OSTYPE in
        msys|win32) result="//XX=x${result}" ;;
    esac
    echo "$result"
}
# Usage example
MY_SUBJECT=$(fixup_cn_subject "/C=GB/CN=foo")
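For illustration, the fixed-up subject can then be fed back into the original command (same key and output paths as in the question):

openssl req -new -subj "$MY_SUBJECT" \
    -addext "subjectAltName = DNS:foo.co.uk" \
    -addext "certificatePolicies = 1.2.3.4" \
    -key ./private-key.pem -out ~/req.pem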
Found a workaround by passing a dummy value as the first attribute, for example: -subj '//SKIP=skip/C=gb/CN=foo'
I had the same issue using bash, but running the exact same command in PowerShell worked for me. Hopefully this will help someone.

What is the difference between reading in text mode and binary mode, when checking checksums using `sha1sum`?

I often compute checksums of files downloaded from the Internet using the shasum family of commands, without paying attention to the mode for reading. In particular, sha1sum defaults to text mode.
What is the difference between reading in text mode and binary mode, when checking checksums using sha1sum?
~/Downloads$ sha1sum --help
Usage: sha1sum [OPTION]... [FILE]...
Print or check SHA1 (160-bit) checksums.
With no FILE, or when FILE is -, read standard input.
  -b, --binary         read in binary mode
  -c, --check          read SHA1 sums from the FILEs and check them
      --tag            create a BSD-style checksum
  -t, --text           read in text mode (default)
  -z, --zero           end each output line with NUL, not newline,
                       and disable file name escaping

The following five options are useful only when verifying checksums:
      --ignore-missing  don't fail or report status for missing files
      --quiet           don't print OK for each successfully verified file
      --status          don't output anything, status code shows success
      --strict          exit non-zero for improperly formatted checksum lines
  -w, --warn            warn about improperly formatted checksum lines

      --help     display this help and exit
      --version  output version information and exit
The sums are computed as described in FIPS-180-1. When checking, the input
should be a former output of this program. The default mode is to print a
line with checksum, a space, a character indicating input mode ('*' for binary,
' ' for text or where binary is insignificant), and name for each FILE.
GNU coreutils online help: <https://www.gnu.org/software/coreutils/>
Full documentation <https://www.gnu.org/software/coreutils/sha1sum>
or available locally via: info '(coreutils) sha1sum invocation'
None, at least judging by the answer to the similar question about the difference between text and binary mode in md5sum: on a GNU/Linux system the two modes produce identical checksums, and the flags appear to exist for standards compliance and for systems (such as DOS/Windows) where text mode translates line endings.
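A quick way to convince yourself on a GNU/Linux box (the file name is just an example):

echo 'hello' > sample.txt
sha1sum -t sample.txt    # prints: <digest>  sample.txt
sha1sum -b sample.txt    # prints: <digest> *sample.txt -- same digest, '*' marks binary mode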

Remove some main commands and/or default options from waf in wscript

I have a waf script which adds some options, therefore I use Options from the waflib.
A minimal working example is:
from waflib import Context, Options
from waflib.Tools.compiler_c import c_compiler

def options(opt):
    opt.load('compiler_c')

def configure(cnf):
    cnf.load('compiler_c')
    cnf.env.abc = 'def'

def build(bld):
    print('hello')
This leads to a lot of options I do not support, but also others that I would like to or have to support. The full list of default commands and options is shown below. But how do I remove the ones that are actually not supported, like
some main commands, e.g. dist, step and install, or
some options, e.g. --no-msvc-lazy, or
some configuration options, e.g. -t, or
the whole section Installation and uninstallation options?
The full output of options is then:
waf [commands] [options]
Main commands (example: ./waf build -j4)
build : executes the build
clean : cleans the project
configure: configures the project
dist : makes a tarball for redistributing the sources
distcheck: checks if the project compiles (tarball from 'dist')
distclean: removes build folders and data
install : installs the targets on the system
list : lists the targets to execute
step : executes tasks in a step-by-step fashion, for debugging
uninstall: removes the targets installed
Options:
--version show program's version number and exit
-c COLORS, --color=COLORS
whether to use colors (yes/no/auto) [default: auto]
-j JOBS, --jobs=JOBS amount of parallel jobs (8)
-k, --keep continue despite errors (-kk to try harder)
-v, --verbose verbosity level -v -vv or -vvv [default: 0]
--zones=ZONES debugging zones (task_gen, deps, tasks, etc)
-h, --help show this help message and exit
--msvc_version=MSVC_VERSION
msvc version, eg: "msvc 10.0,msvc 9.0"
--msvc_targets=MSVC_TARGETS
msvc targets, eg: "x64,arm"
--no-msvc-lazy lazily check msvc target environments
Configuration options:
-o OUT, --out=OUT build dir for the project
-t TOP, --top=TOP src dir for the project
--prefix=PREFIX installation prefix [default: 'C:\\users\\user\\appdata\\local\\temp']
--bindir=BINDIR bindir
--libdir=LIBDIR libdir
--check-c-compiler=CHECK_C_COMPILER
list of C compilers to try [msvc gcc clang]
Build and installation options:
-p, --progress -p: progress bar; -pp: ide output
--targets=TARGETS task generators, e.g. "target1,target2"
Step options:
--files=FILES files to process, by regexp, e.g. "*/main.c,*/test/main.o"
Installation and uninstallation options:
--destdir=DESTDIR installation root [default: '']
-f, --force force file installation
--distcheck-args=ARGS
arguments to pass to distcheck
For options, the options context has a parser attribute, which is a Python optparse.OptionParser. You can use the remove_option method of OptionParser:
def options(opt):
    opt.parser.remove_option("--top")
    opt.parser.remove_option("--no-msvc-lazy")
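To hide a whole section such as Installation and uninstallation options, one possibility is to filter the parser's option groups directly (a sketch relying on optparse internals; the options would still be accepted if passed, they merely disappear from the help output):

def options(opt):
    # optparse keeps help sections as OptionGroup objects in parser.option_groups;
    # dropping a group by its title removes the whole section from --help
    opt.parser.option_groups = [
        g for g in opt.parser.option_groups
        if g.title != 'Installation and uninstallation options'
    ]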
For commands, there is a metaclass in waf that automatically registers Context classes (see the waflib.Context sources).
All Context classes are therefore stored in the global variable waflib.Context.classes. To get rid of commands you can manipulate this variable. For instance, to get rid of StepContext and friends, you can do something like:
import waflib
import waflib.Build

def options(opt):
    all_contexts = waflib.Context.classes
    all_contexts.remove(waflib.Build.StepContext)
    all_contexts.remove(waflib.Build.InstallContext)
    all_contexts.remove(waflib.Build.UninstallContext)
The dist/distcheck commands are a special case defined in waflib.Scripting; it's not easy to get rid of them.

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built nagios from source, and have used yum to install into this root all dependencies needed, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing this question, which had practically the same problem I'm having with check_url, I decided to open a new question on the subject, because
a) I'm not using NRPE with this check, and
b) I tried the suggestions made in the earlier question, but none of them worked. For example:
./check_url some-domain.com; echo $?
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Ran the following:
./check_url_status -U some-domain.com
When I ran the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name               {my-shared-web-server}
    service_description     URL: somedomain.com
    check_command           check_url!somedomain.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
    host_name               myers
    service_description     URL: my-url.com
    check_command           check_http_url!http://my-url.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
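To verify the plugin outside Nagios first, the same check can be run by hand (host and URL taken from the definitions above; the plugin path is the conventional default):

/usr/local/nagios/libexec/check_http -I my-url.com -u http://my-url.com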
A better way to monitor URLs is WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl utils package; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can write your own script plugin. It is easy; you only have to check the URL with something like:
curl -Is -k "$URL" | grep HTTP | cut -d ' ' -f2
$URL is what you pass to the script as a parameter.
Then check the result: if the code is greater than 399 you have a problem; otherwise everything is OK. Then exit with the right exit code and message for Nagios, as in the sketch below.
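A minimal version of such a plugin might look like this (a hedged sketch: the script name check_http_code is made up, and it relies on the Nagios plugin convention of exit 0 for OK and exit 2 for CRITICAL):

#!/bin/sh
# check_http_code (hypothetical name): report CRITICAL for HTTP status > 399
URL="$1"
# take the status code from the response line; tr strips the trailing CR
CODE=$(curl -Is -k "$URL" | grep HTTP | cut -d ' ' -f2 | tr -d '\r')
if [ -z "$CODE" ]; then
    echo "CRITICAL - no HTTP response from $URL"
    exit 2
elif [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2
else
    echo "OK - $URL returned HTTP $CODE"
    exit 0
fi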

Write current svn version into text file

I have a Rails site. I'd like, on mongrel restart, to write the current svn version into public/version.txt, so that I can then put this into a comment in the page header.
The problem is getting the current local version of svn; I'm a little confused.
If, for example, I do svn update on a file which hasn't been updated in a while, I get "At revision 4571.". However, if I do svn info, I get:
Path: .
URL: http://my.url/trunk
Repository Root: http://my.url/lesson_planner
Repository UUID: #########
Revision: 4570
Node Kind: directory
Schedule: normal
Last Changed Author: max
Last Changed Rev: 4570
Last Changed Date: 2009-11-30 17:14:52 +0000 (Mon, 30 Nov 2009)
Note this says revision 4570, one lower than the previous command.
Can anyone set me straight and show me how to simply get the current version number?
thanks, max
Subversion comes with a command for doing exactly this: SVNVERSION.EXE.
usage: svnversion [OPTIONS] [WC_PATH [TRAIL_URL]]
Produce a compact 'version number' for the working copy path
WC_PATH. TRAIL_URL is the trailing portion of the URL used to
determine if WC_PATH itself is switched (detection of switches
within WC_PATH does not rely on TRAIL_URL). The version number
is written to standard output. For example:
$ svnversion . /repos/svn/trunk
4168
The version number will be a single number if the working
copy is single revision, unmodified, not switched and with
an URL that matches the TRAIL_URL argument. If the working
copy is unusual the version number will be more complex:
4123:4168 mixed revision working copy
4168M modified working copy
4123S switched working copy
4123:4168MS mixed revision, modified, switched working copy
If invoked on a directory that is not a working copy, an
exported directory say, the program will output 'exported'.
If invoked without arguments WC_PATH will be the current directory.
Valid options:
-n [--no-newline] : do not output the trailing newline
-c [--committed] : last changed rather than current revisions
-h [--help] : display this help
--version : show version information
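Applied to the question, a one-liner along these lines should be enough (the output path is the one from the question; -n suppresses the trailing newline, as documented above):

svnversion -n . > public/version.txt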
I use the following shell script snippet to create a header file svnversion.h, which defines a few constant character strings I use in compiled code. You should be able to do something very similar:
#!/bin/sh -e

svnversion() {
    svnrevision=`LC_ALL=C svn info | awk '/^Revision:/ {print $2}'`
    svndate=`LC_ALL=C svn info | awk '/^Last Changed Date:/ {print $4,$5}'`
    now=`date`
    cat <<EOF > svnversion.h
// Do not edit! This file was autogenerated
// by $0
// on $now
//
// svnrevision and svndate are as reported by svn at that point in time;
// compiledate and compiletime are filled in by gcc at compilation
#include <stdlib.h>
static const char* svnrevision = "$svnrevision";
static const char* svndate = "$svndate";
static const char* compiletime = __TIME__;
static const char* compiledate = __DATE__;
EOF
}

test -f svnversion.h || svnversion
This assumes that you would remove the created header file to trigger the build of a fresh one.
If you just want to print the latest revision of the repository, you can use something like this:
svn info <repository_url> -rHEAD | grep '^Revision: ' | awk '{print $2}'
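To tie this back to writing the version at mongrel restart, the working-copy revision can be captured the same way (a sketch: public/version.txt is the path from the question, and the command assumes it runs from the Rails root):

svn info . | awk '/^Revision:/ {print $2}' > public/version.txt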
You can use capistrano for deployment; it creates a REVISION file, which you can copy to public/version.txt.
It seems that you are running svn info on the directory, but svn update on a specific file. If you update the directory to revision 4571, svn info should print:
Path: .
URL: http://my.url/trunk
Repository Root: http://my.url/lesson_planner
Repository UUID: #########
Revision: 4571
[...]
Last Changed Rev: 4571
Note that the "last changed revision" does not necessarily align with the latest revision of the repository.
Thanks to everyone who suggested capistrano and svn info.
We do actually use capistrano, and it does indeed create this REVISION file, which I guess I saw before but didn't pay attention to. As it happens, though, this isn't quite what I need, because it only gets updated on deploy, whereas sometimes we might sneakily update a couple of files and then restart, rather than doing a full deploy.
I ended up building my own file using svn info, grep and awk, as many people have suggested here, and putting it in public. This is created on mongrel start, which is part of both the deploy process and the restart process, so it gets done both times.
thanks all!
