Remove some main commands and/or default options from waf in wscript - waf

I have a waf script which adds some options, so I use Options from waflib.
A minimal working example is:
from waflib import Context, Options
from waflib.Tools.compiler_c import c_compiler

def options(opt):
    opt.load('compiler_c')

def configure(cnf):
    cnf.load('compiler_c')
    cnf.env.abc = 'def'

def build(bld):
    print('hello')
This leads to a lot of options I do not support, but also others I would like to or have to support. The full list of default commands and options is shown below. But how do I remove the ones that are actually not supported, like
some main commands, e.g. dist, step and install, or
some options, e.g. --no-msvc-lazy, or
some Configuration options, e.g. -t, or
the whole section Installation and uninstallation options entirely?
The full output of options is then:
waf [commands] [options]

Main commands (example: ./waf build -j4)
  build    : executes the build
  clean    : cleans the project
  configure: configures the project
  dist     : makes a tarball for redistributing the sources
  distcheck: checks if the project compiles (tarball from 'dist')
  distclean: removes build folders and data
  install  : installs the targets on the system
  list     : lists the targets to execute
  step     : executes tasks in a step-by-step fashion, for debugging
  uninstall: removes the targets installed

Options:
  --version             show program's version number and exit
  -c COLORS, --color=COLORS
                        whether to use colors (yes/no/auto) [default: auto]
  -j JOBS, --jobs=JOBS  amount of parallel jobs (8)
  -k, --keep            continue despite errors (-kk to try harder)
  -v, --verbose         verbosity level -v -vv or -vvv [default: 0]
  --zones=ZONES         debugging zones (task_gen, deps, tasks, etc)
  -h, --help            show this help message and exit
  --msvc_version=MSVC_VERSION
                        msvc version, eg: "msvc 10.0,msvc 9.0"
  --msvc_targets=MSVC_TARGETS
                        msvc targets, eg: "x64,arm"
  --no-msvc-lazy        lazily check msvc target environments

Configuration options:
  -o OUT, --out=OUT     build dir for the project
  -t TOP, --top=TOP     src dir for the project
  --prefix=PREFIX       installation prefix [default: 'C:\\users\\user\\appdata\\local\\temp']
  --bindir=BINDIR       bindir
  --libdir=LIBDIR       libdir
  --check-c-compiler=CHECK_C_COMPILER
                        list of C compilers to try [msvc gcc clang]

Build and installation options:
  -p, --progress        -p: progress bar; -pp: ide output
  --targets=TARGETS     task generators, e.g. "target1,target2"

Step options:
  --files=FILES         files to process, by regexp, e.g. "*/main.c,*/test/main.o"

Installation and uninstallation options:
  --destdir=DESTDIR     installation root [default: '']
  -f, --force           force file installation
  --distcheck-args=ARGS
                        arguments to pass to distcheck

For options, the option context has a parser attribute, which is a Python optparse.OptionParser. You can use the remove_option method of OptionParser:
def options(opt):
    opt.parser.remove_option("--top")
    opt.parser.remove_option("--no-msvc-lazy")
For commands, there is a metaclass in waf that automatically registers Context classes (see the waflib.Context sources).
So all Context classes are stored in the global variable waflib.Context.classes. To get rid of commands you can manipulate this variable. For instance, to get rid of StepContext and friends, you can do something like:
from waflib import Build, Context

def options(opt):
    # Commands are looked up in Context.classes, so removing a class
    # removes the corresponding command (and its entry in the help output).
    all_contexts = Context.classes
    all_contexts.remove(Build.StepContext)
    all_contexts.remove(Build.InstallContext)
    all_contexts.remove(Build.UninstallContext)
The dist/distcheck commands are a special case defined in waflib.Scripting; it is not easy to get rid of them.


.clang-tidy configuration file content is being ignored

I want to modify the checks that the code analyzer program clang-tidy is doing, but it seems like the content of the configuration file .clang-tidy is being ignored.
I create the file by calling clang-tidy with the flag -dump-config and redirect the output to the file .clang-tidy.
Then I call sed to replace the value 800 with the value 700, which corresponds to the option with key google-readability-function-size.StatementThreshold. The specific option is not important to me, this is just for testing.
I verify that the value has indeed been changed.
Lastly, I rerun clang-tidy to see if it has accepted the new configuration, but it remains unchanged.
# generate config
clang-tidy -dump-config > .clang-tidy
# change config
sed -i 's/800/700/' .clang-tidy
# verify change
grep '700' .clang-tidy
# use config, does not work
clang-tidy -config '' -dump-config
The CheckOptions entry remains at the default value; the content of the config file has been ignored:
CheckOptions:
  # some lines omitted for brevity
  - key:   google-readability-function-size.StatementThreshold
    value: '800'
Running clang-tidy -config '' -dump-config -explain-config shows that the configuration file has at least been found, i.e. many clang-analyzer specific checks are enabled in the detected config file, but the check google-readability-function-size.StatementThreshold is not listed.
I also tried passing the config directly as command line parameter with the command clang-tidy -config="{CheckOptions: [ {key: google-readability-function-size.StatementThreshold, value: 700} ]}" -dump-config, but got the same result.
The command clang-tidy --version gives the following output, running on Ubuntu 20.04:
LLVM (http://llvm.org/):
LLVM version 10.0.0
Optimized build.
Default target: x86_64-pc-linux-gnu
Host CPU: haswell
To see the change, you need to enable the corresponding check in the Checks list:
Checks: 'google-readability-function-size'
You can see it changed in the effective configuration with:
clang-tidy --dump-config
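Putting it together, a minimal .clang-tidy sketch (the Checks value mirrors the line above; whether you append it to your existing check list or use it alone is up to you):

Checks: 'google-readability-function-size'
CheckOptions:
  - key:   google-readability-function-size.StatementThreshold
    value: '700'

With the check enabled, clang-tidy --dump-config should report the threshold of 700, and the check will actually flag functions that exceed it.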
Another pitfall to be aware of is that errors parsing the values will be silently discarded.

How do I make a bazel `sh_binary` target depend on other binary targets?

I have set up bazel to build a number of CLI tools that perform various database maintenance tasks. Each one is a py_binary or cc_binary target that is called from the command line with the path to some data file: it processes that file and stores the results in a database.
Now, I need to create a dependent package that contains data files and shell scripts that call these CLI tools to perform application-specific database operations.
However, there doesn't seem to be a way to depend on the existing py_binary or cc_binary targets from a new package that only contains sh_binary targets and data files. Trying to do so results in an error like:
ERROR: /workspace/shbin/BUILD.bazel:5:12: in deps attribute of sh_binary rule //shbin:run: py_binary rule '//pybin:counter' is misplaced here (expected sh_library)
Is there a way to call/depend on an existing bazel binary target from a shell script using sh_binary?
I have implemented a full example here:
https://github.com/psigen/bazel-mixed-binaries
Notes:
I cannot use py_library and cc_library instead of py_binary and cc_binary. This is because (a) I need to call mixes of the two languages to process my data files and (b) these tools are from an upstream repository where they are already designed as CLI tools.
I also cannot put all the data files into the CLI tool packages -- there are multiple application-specific packages and they cannot be mixed.
You can either create a genrule to run these tools as part of the build, or create a sh_binary that depends on the tools via the data attribute and runs them.
The genrule approach
This is the easier way and lets you run the tools as part of the build.
genrule(
    name = "foo",
    tools = [
        "//tool_a:py",
        "//tool_b:cc",
    ],
    srcs = [
        "//source:file1",
        ":file2",
    ],
    outs = [
        "output_file1",
        "output_file2",
    ],
    cmd = "$(location //tool_a:py) --input=$(location //source:file1) --output=$(location output_file1) && $(location //tool_b:cc) < $(location :file2) > $(location output_file2)",
)
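If the goal is just to produce the output files during the build, you can then build the genrule directly (the //shbin package name here is taken from your error message; adjust it to wherever the rule lives):

bazel build //shbin:foo

The generated files end up under bazel-bin/shbin/ (or bazel-genfiles/shbin/ on older Bazel versions).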
The sh_binary approach
This is more complicated, but lets you run the sh_binary either as part of the build (if it is in a genrule.tools, similar to the previous approach) or after the build (from under bazel-bin).
In the sh_binary you have to data-depend on the tools:
sh_binary(
    name = "foo",
    srcs = ["my_shbin.sh"],
    data = [
        "//tool_a:py",
        "//tool_b:cc",
    ],
)
Then, in the sh_binary you have to use the so-called "Bash runfiles library" built into Bazel to look up the runtime-path of the binaries. This library's documentation is in its source file.
The idea is:
the sh_binary has to depend on a specific target
you have to copy-paste some boilerplate code to the top of the sh_binary (reason is described here)
then you can use the rlocation function to look up the runtime-path of the binaries
For example your my_shbin.sh may look like this:
#!/bin/bash
# --- begin runfiles.bash initialization ---
...
# --- end runfiles.bash initialization ---

path=$(rlocation "__main__/tool_a/py")
if [[ ! -f "${path:-}" ]]; then
  echo >&2 "ERROR: could not look up the Python tool path"
  exit 1
fi

$path --input=$1 --output=$2
The __main__ in the rlocation path argument is the name of the workspace. Since your WORKSPACE file does not have a workspace() rule in it, which would define the workspace's name, Bazel uses the default workspace name, which is __main__.
An easier approach for me is to add the cc_binary as a dependency in the data section. In prefix/BUILD:
cc_binary(name = "foo", ...)
sh_test(name = "foo_test", srcs = ["foo_test.sh"], data = [":foo"])
Inside foo_test.sh, the working directory is different, so you need to find the right prefix for the binary
#! /usr/bin/env bash
executable=prefix/foo
$executable ...
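If you prefer not to hard-code the prefix, the test runner exports environment variables you can build the path from. A hedged sketch, assuming the data dependency lives in the same workspace as the test:

#! /usr/bin/env bash
# TEST_SRCDIR points at the test's runfiles root, TEST_WORKSPACE at the workspace name
executable="${TEST_SRCDIR}/${TEST_WORKSPACE}/prefix/foo"
"$executable" ...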
A clean way to do this is to use args and $(location):
Contents of BUILD:
py_binary(
    name = "counter",
    srcs = ["counter.py"],
    main = "counter.py",
)

sh_binary(
    name = "run",
    srcs = ["run.sh"],
    data = [":counter"],
    args = ["$(location :counter)"],
)
Contents of counter.py (your tool):
print("This is the counter tool.")
Contents of run.sh (your bash script):
#!/bin/bash
set -eEuo pipefail
counter="$1"
shift
echo "This is the bash script, about to call the counter tool."
"$counter"
And here's a demo showing the bash script calling the Python tool:
$ bazel run //example:run 2>/dev/null
This is the bash script, about to call the counter tool.
This is the counter tool.
It's also worth mentioning this note (from the docs):
The arguments are not passed when you run the target outside of bazel (for example, by manually executing the binary in bazel-bin/).

Fortify, how to start analysis through command

How can we generate a Fortify report using the command line on Linux?
In the command, how can we include only some folders or files for analysis, and how can we specify the location to store the report, etc.?
Thanks,
Karthik
1. Step#1 (clean cache)
You need to plan the scan structure before starting:
scanid = 9999 (can be anything you like)
ProjectRoot = /local/proj/9999/
WorkingDirectory = /local/proj/9999/working
(this dir is huge; you need to "rm -rf ./working && mkdir ./working" before every scan, or byte code piles up underneath this dir and consumes your hard disk fast)
log = /local/proj/9999/working/sca.log
source='/local/proj/9999/source/src/**.*'
classpath='/local/proj/9999/source/WEB-INF/lib/*.jar; /local/proj/9999/source/jars/**.*; /local/proj/9999/source/classes/**.*'
./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -clean
It is important to specify ProjectRoot; if you do not override this system default, everything will be put under your /home/<user>/.fortify.
The sca.log location is very important: if Fortify does not find this file, it cannot find the byte code to scan.
You can alter the ProjectRoot and WorkingDirectory once and for all if you are the only user (in FORTIFY_HOME/Core/config/fortify_sca.properties).
In that case, your command line would simply be ./sourceanalyzer -b 9999 -clean
2. Step#2 (translate source code to byte code)
nohup ./sourceanalyzer -b 9999 -verbose -64 -Xmx8000M -Xss24M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+UseParallelGC -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/sca.log -source 1.5 -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar:/local/proj/9999/source/jars/**/*.jar:/local/proj/9999/source/classes/**/*.class' -extdirs '/local/proj/9999/source/wars/*.war' '/local/proj/9999/source/src/**/*' &
Always run it as a Unix background job (&); in case your session to the server times out, it will keep working.
-classpath: put all your known classpath entries here for Fortify to resolve the function calls. If a function is not found, Fortify will skip the source code translation, so that part will not be scanned later. You will get a poor scan quality but the FPR looks good (few issues reported). It is important to have all dependency jars in place.
-extdirs: put all directories/files you don't want to be scanned here.
The last section, the files between ' ', are your source.
-64 is to use 64-bit Java; if not specified, 32-bit will be used and the max heap should be < 1.3 GB (-Xmx1200M is safe).
The -XX: options have the same meaning as when launching an application server; only use these to control the class heap and garbage collection. This is to tweak performance.
-source is the Java version (1.5 to 1.8).
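To address the "only some folders or files" part of the question: the file specification at the end of the translate command controls what gets translated, and only translated files are scanned. A hedged sketch, reusing the build id and paths from above and narrowing the source spec to a single module (the module path is purely illustrative):

./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar' '/local/proj/9999/source/src/com/example/module_a/**/*'

The report location is then whatever you pass to -f in the scan step (the .fpr file) and to ReportGenerator in step #4.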
3. Step#3 (scan with rulepack, custom rules, filters, etc)
nohup ./sourceanalyzer -b 9999 -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999 -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -scan -filter '/local/other/filter.txt' -rules '/local/other/custom/*.xml' -f '/local/proj/9999.fpr' &
-filter: the file name must be filter.txt; any rule GUID in this file will not be reported.
-rules: these are the custom rules you wrote; the HP rulepack is in the FORTIFY_HOME/Core/config/rules directory.
-scan: keyword to tell the Fortify engine to scan the existing scan id. You can skip step #2 and only do step #3 if you did not change code and just want to play with different filters/custom rules.
4. Step#4 Generate PDF from the FPR file (if required)
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr'

How to re-build jna-4.1.0.jar with linux-s390x specific libjnidispatch.so

How can I rebuild the jna-4.1.0.jar file to include the linux-s390x specific libjnidispatch.so file?
This is needed by one of my applications, which is failing on the dependency on this libjnidispatch.so file.
Did try to follow this question: How to use JNAerator with multiple dynamic libraries under one header?
Syntax Used:
java -jar jnaerator-0.11-shaded.jar \
> -arch linux-s390x linux-s390x/libjnidispatch.so \
> -mode jna-3.3.0-jenkins-3.jar \
> -jar jna-3.3.0-jenkins-3_updated.jar
Getting below error:
ERROR: JNAeration failed !
#
# Error parsing arguments :
# -arch linux-s390x linux-s390x/libjnidispatch.so -mode jna-3.3.0-jenkins-3.jar -jar jna-3.3.0-jenkins-3_updated.jar : com.ochafik.lang.jnaerator.JNAerator$CommandLineException: Argument 'linux_s390x' is not one of the expected values :
# linux_x64,
# linux_x86,
# armeabi,
# sunos_x86,
# sunos_sparc,
# darwin_universal,
# win32,
# win64
# Please use -h for help on the command-line options available.
Clone the git repository.
Make sure you have a JDK installed, with Apache ant and native build tools (make, gcc, grep, etc).
Then just run ant native; ant jar.
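Putting those steps together, a rough shell session might look like this (the repository URL is the JNA project on GitHub; everything else is as described above):

git clone https://github.com/java-native-access/jna.git
cd jna
ant native   # builds libjnidispatch.so for the current platform
ant jar      # packages the jar with the freshly built native library

Run this on the s390x machine itself so that the native build targets the right architecture.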
Note that s390x may not be recognized out of the box, but if you look through the build files for how the other platforms are handled it should be straightforward to add in a switch for s390x (see build.xml and native/Makefile).
If you have a package distribution for a previous JNA version, that package may already include the necessary native libraries; if so, consider submitting them to the JNA project as a GitHub pull request to have them incorporated into the project proper.

How do I run a beam file compiled by Elixir or Erlang?

I have installed Erlang/OTP and Elixir, and compiled the HelloWorld program into a BEAM using the command:
elixirc test.ex
Which produced a file named Elixir.Hello.beam
How do I run this file?
Short answer: no way to know for sure without also knowing the contents of your source file :)
There are a few ways to run Elixir code. This answer will be an overview of various workflows that can be used with Elixir.
When you are just getting started and want to try things out, launching iex and evaluating expressions one at a time is the way to go.
iex(5)> Enum.reverse [1,2,3,4]
[4, 3, 2, 1]
You can also get help on Elixir modules and functions in iex. Most of the functions have examples in their docs.
iex(6)> h Enum.reverse
def reverse(collection)
Reverses the collection.
[...]
When you want to put some code into a file to reuse it later, the recommended (and de facto standard) way is to create a mix project and start adding modules to it. But perhaps, you would like to know what's going on under the covers before relying on mix to perform common tasks like compiling code, starting applications, and so on. Let me explain that.
The simplest way to put some expressions into a file and run it would be to use the elixir command.
x = :math.sqrt(1234)
IO.puts "Your square root is #{x}"
Put the above fragment of code into a file named simple.exs and run it with elixir simple.exs. The .exs extension is just a convention to indicate that the file is meant to be evaluated (and that is what we did).
This works up until the point you want to start building a project. Then you will need to organize your code into modules. Each module is a collection of functions. It is also the minimal compilation unit: each module is compiled into a .beam file. Usually people have one module per source file, but it is also fine to define more than one. Regardless of the number of modules in a single source file, each module will end up in its own .beam file when compiled.
defmodule M do
  def hi(name) do
    IO.puts "Hello, #{name}"
  end
end
We have defined a module with a single function. Save it to a file named mymod.ex. We can use it in multiple ways:
launch iex and evaluate the code in the spawned shell session:
$ iex mymod.ex
iex> M.hi "Alex"
Hello, Alex
:ok
evaluate it before running some other code. For example, to evaluate a single expression on the command line, use elixir -e <expr>. You can "require" (basically, evaluate and load) one or more files before it:
$ elixir -r mymod.ex -e 'M.hi "Alex"'
Hello, Alex
compile it and let the code loading facility of the VM find it
$ elixirc mymod.ex
$ iex
iex> M.hi "Alex"
Hello, Alex
:ok
In that last example we compiled the module which produced a file named Elixir.M.beam in the current directory. When you then run iex in the same directory, the module will be loaded the first time a function from it is called. You could also use other ways to evaluate code, like elixir -e 'M.hi "..."'. As long as the .beam file can be found by the code loader, the module will be loaded and the appropriate function in it will be executed.
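If the compiled .beam lives somewhere other than the current directory, you can add that directory to the code path yourself. A small sketch, assuming the module is compiled into an ./ebin directory (the directory name is just an illustration):

$ elixirc -o ebin mymod.ex
$ elixir -pa ebin -e 'M.hi "Alex"'
Hello, Alex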
However, this was all about trying to play with some code examples. When you are ready to build a project in Elixir, you will need to use mix. The workflow with mix is more or less as follows:
$ mix new myproj
* creating README.md
* creating .gitignore
* creating mix.exs
[...]
$ cd myproj
# 'mix new' has generated a dummy test for you
# see test/myproj_test.exs
$ mix test
Add new modules in the lib/ directory. It is customary to prefix all module names with your project name. So if you take the M module we defined above and put it into the file lib/m.ex, it'll look like this:
defmodule Myproj.M do
  def hi(name) do
    IO.puts "Hello, #{name}"
  end
end
Now you can start a shell with the Mix project loaded in it.
$ iex -S mix
Running the above will compile all your source files and put them under the _build directory. Mix will also set up the code path for you so that the code loader can locate .beam files in that directory.
Evaluating expressions in the context of a mix project looks like this:
$ mix run -e 'Myproj.M.hi "..."'
Again, no need to compile anything. Most mix tasks will recompile any changed files, so you can safely assume that any modules you have defined are available when you call functions from them.
Run mix help to see all available tasks and mix help <task> to get a detailed description of a particular task.
To specifically address the question:
$ elixirc test.ex
will produce a file named Elixir.Hello.beam, if the file defines a Hello module.
If you run elixir or iex from the directory containing this file, the module will be available. So:
$ elixir -e Hello.some_function
or
$ iex
iex(1)> Hello.some_function
Assume that I write an Elixir program like this:
defmodule PascalTriangle do
  defp next_row(m), do: for(x <- (-1..Map.size(m)-1), do: { (x+1), Map.get(m, x, 0) + Map.get(m, x+1, 0) } ) |> Map.new

  def draw(1), do: (IO.puts(1); %{ 0 => 1})

  def draw(n) do
    (new_map = draw(n - 1) |> next_row ) |> Map.values |> Enum.join(" ") |> IO.puts
    new_map
  end
end
The module PascalTriangle can be used like this: PascalTriangle.draw(8)
When you use elixirc to compile the ex file, it will create a file called Elixir.PascalTriangle.beam.
From command line, you can execute the beam file like this:
elixir -e "PascalTriangle.draw(8)"
Running PascalTriangle.draw(8) prints output similar to this:
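(The rows below are reconstructed from the code above; the original answer showed them as a screenshot.)

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1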
