I found that adding a print statement to my Bazel macros and rule implementations results in console output added to the build, such as:
DEBUG: /home/$USER/repo/source.bzl:82:5: message XYZ
And I can even introspect objects a bit with standard Python techniques, like:
def my_macro(my_list):
    print(my_list)
    print(type(my_list))
    print(dir(my_list))
DEBUG: /home/$USER/repo/source.bzl:83:5: ["//visibility:public"]
DEBUG: /home/$USER/repo/source.bzl:84:5: list
DEBUG: /home/$USER/repo/source.bzl:85:5: ["append", "extend", "index", "insert", "pop", "remove"]
Is there any way to access something like the traceback module to look at stack traces and whatnot? Maybe even something like importing pdb and setting a breakpoint?
There's a little section in the documentation dedicated to macro debugging:
https://docs.bazel.build/versions/master/skylark/macros.html#debugging
You can also use print for debugging.
Where the print function is explicitly pointed out, a handy link redirects you to the list of global functions:
https://docs.bazel.build/versions/master/skylark/lib/globals.html
There you can see the type and dir entries. I'm not seeing anything stack-trace oriented, though, just some techniques for probing the current context.
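If what you're mostly after is "where was this macro called from", one crude stand-in for a stack trace (a sketch; the message text is just illustrative) is to temporarily drop a fail() into the macro, since Bazel reports the resulting error together with the Starlark call chain that led to it:

def my_macro(my_list):
    # Temporarily abort evaluation; the error report shows the BUILD/.bzl
    # locations that invoked this macro.
    fail("debug stop: my_list = {}".format(my_list))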
A project I'm working on (Envoy proxy) uses Bazel and tcmalloc. I'd like to configure it to use the debug version of tcmalloc when compiling for debug and fastbuild, and the optimized one for optimized builds.
There are other conditions as well, e.g. a command-line flag passed to bazel to turn off tcmalloc completely, using this logic:
https://github.com/envoyproxy/envoy/blob/7d2e84d3d0f8a4ffbf4257c450b3e5a6d93d4697/bazel/envoy_build_system.bzl#L166
def tcmalloc_external_dep(repository):
    return select({
        repository + "//bazel:disable_tcmalloc": None,
        "//conditions:default": envoy_external_dep_path("tcmalloc_and_profiler"),
    })
I have a PR out (https://github.com/envoyproxy/envoy/pull/5424) failing continuous integration which changes the logic (https://github.com/envoyproxy/envoy/blob/1ed5aba5894ce519181edbdaee3f52c2971befaf/bazel/envoy_build_system.bzl#L156) to:
def tcmalloc_external_dep(repository):
    return select({
        repository + "//bazel:disable_tcmalloc": None,
        repository + "//bazel:dbg_build": envoy_external_dep_path("tcmalloc_debug"),
        "//conditions:default": envoy_external_dep_path("tcmalloc_and_profiler"),
    })
However, this does not work because we allow disabling tcmalloc in debug builds (which we do in continuous-integration scripts when running TSan). This runs afoul of Bazel, which evidently expects the conditions to be mutually exclusive, whereas I want "first matching rule wins" in this case. I get this error:
ERROR: /home/jmarantz/git4/envoy/test/common/network/BUILD:58:1: Illegal ambiguous match on configurable attribute "malloc" in //test/common/network:dns_impl_test:
//bazel:disable_tcmalloc
//bazel:dbg_build
Multiple matches are not allowed unless one is unambiguously more specialized.
ERROR: Analysis of target '//test/common/network:dns_impl_test' failed; build aborted:
/home/jmarantz/git4/envoy/test/common/network/BUILD:58:1: Illegal ambiguous match on configurable attribute "malloc" in //test/common/network:dns_impl_test:
//bazel:disable_tcmalloc
//bazel:dbg_build
What's the best way to solve this? Can I use a Python conditional on the Bazel command-line settings? Can I use AND or OR operators in the conditional expressions to make them mutually exclusive? Or is there another approach I could use?
Not an answer, but perhaps I can give you some ideas:
As of now, you can simulate AND and OR by nesting selects or refactoring your config_settings.
There is a proposal for some changes to add flexibility here:
https://github.com/bazelbuild/proposals/blob/master/designs/2018-11-09-config-setting-chaining.md
You might also find some useful ideas in Skylib.
https://github.com/bazelbuild/bazel-skylib
Yup, you can chain selects using https://github.com/bazelbuild/bazel-skylib/blob/master/lib/selects.bzl#L80. You can also write your own feature-flag rule that can be used in the select and that has arbitrary logic in it; see https://source.bazel.build/bazel/+/0faef9148362a5234df3507441dadb0f32ade59a:tools/cpp/compiler_flag.bzl for an example: it's a rule that can be used in selects, gets the current C++ toolchain, inspects its state, and returns its compiler value. You'll have to follow the thread a bit to see all the pieces. I'll ask for better docs for this.
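For example, here is a sketch of how the Envoy case above could be made unambiguous with bazel-skylib; it assumes a skylib version that provides selects.config_setting_group (the mechanism described in the linked proposal) and a hypothetical //bazel:enable_tcmalloc setting that is the inverse of disable_tcmalloc:

load("@bazel_skylib//lib:selects.bzl", "selects")

# In a BUILD file: this group only matches when it is a debug build AND
# tcmalloc has not been disabled, so it can never collide with disable_tcmalloc.
selects.config_setting_group(
    name = "dbg_and_tcmalloc",
    match_all = [
        "//bazel:dbg_build",
        "//bazel:enable_tcmalloc",  # hypothetical inverse of disable_tcmalloc
    ],
)

# In the .bzl file: the select conditions are now mutually exclusive.
def tcmalloc_external_dep(repository):
    return select({
        repository + "//bazel:disable_tcmalloc": None,
        repository + "//bazel:dbg_and_tcmalloc": envoy_external_dep_path("tcmalloc_debug"),
        "//conditions:default": envoy_external_dep_path("tcmalloc_and_profiler"),
    })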
I'm reading the source code of a project that is a combination of C++ and Lua; they are intertwined through luabind.
There is a file la.lua, which contains a function exec(arg). The Lua file also uses functions/variables from other Lua files, so it has statements like these at the beginning:
module(..., package.seeall);
print("Loading "..debug.getinfo(1).source.."...")
require "client_config"
Now I want to run la.exec() from an interactive terminal (on Linux), but I get errors like
attempt to index global 'lg' (a nil value)
If I try to require la.lua, I get
require "la"
Loading #./la.lua...
./la.lua:68: attempt to index global 'ld' (a nil value)
stack traceback:
./lg.lua:68: in main chunk
[C]: in function 'require'
stdin:1: in main chunk
[C]: ?
What can I do?
Well, what could be going wrong?
(Really general guesswork following, there's not much information in what you provided…)
One option is that you're missing dependencies because the files don't properly require all the things they depend on. (If A depends on & requires B and then C, and C depends on B but doesn't require it because it's implicitly loaded by A, directly loading C will fail.) So if you throw some hours at tracking down & fixing dependencies, things might suddenly work.
(However, depending on how the modules are written, this may be impossible without a lot of restructuring. As an example, unless you set package.loaded["foo"] to foo's module table in foo before loading submodules, those submodules cannot require "foo". (Luckily, module does that; in newer code without module it's often forgotten, and then you get an endless loop (until the stack overflows) of foo loading other modules which load foo which loads other modules which …) Further, while "fixing" things so they load in the interpreter you might accidentally break the load order used by the program/library under normal operation, which you won't notice until you try to run it normally again. So it may simply cost too much time to fix dependencies. You might still be able to track down enough to construct a long lua -lfoo -lbar … one-off dependency list which might get things to run, but don't depend on it.)
Another option is that there are missing parts provided by C(++) modules. If these are written in the style of a Lua library (i.e. they have luaopen_FOO), they might load in the interpreter. (IIRC that's unlikely for C++ because it expects the main program to be C++-aware but lua is (usually? always?) plain C.) It's also possible that these modules don't work that way and need to be loaded in some other way. Yet another possibility might be that the main program pre-defines things in the Lua state(s) that it creates, which means that there is no module that you could load to get those things.
While there are some more variations on the above, these should be all of the general categories. If you suspect that your problem is the first one (merely missing dependency information), maybe throw some more time at this as you have a pretty good chance of getting it to work. If you suspect it's one of the latter two, there's a very high chance that you won't get it to work (at least not directly).
You might be able to side-step that problem by patching the program to open up a REPL and then do whatever it is you want to do from there. (The simplest way to do that is to call debug.debug(). It's really limited (no multiline, no implicit return, crappy error information), but if you need/want something better, something that behaves very much like the normal Lua REPL can be written in ~30 lines or so of Lua.)
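For reference, such a minimal REPL might look roughly like this (a sketch only, written in Lua 5.1 syntax to match the module()-era code above):

-- Patched into the program somewhere convenient; type "cont" to leave the loop.
local function repl()
  while true do
    io.write("debug> ")
    local line = io.read("*l")
    if not line or line == "cont" then break end
    local chunk, err = loadstring(line, "=repl")  -- use load(line) on Lua 5.2+
    if not chunk then
      print("compile error: " .. err)
    else
      local ok, res = pcall(chunk)
      if not ok then print("runtime error: " .. tostring(res)) end
    end
  end
end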
I'm trying to internationalize/translate a Python app that is implemented as a wx.App(). I have things working for the most part: I see translations in the right places. But there's a show-stopper bug: crashing at hard-to-predict times with errors like:
Traceback: ...
self.SetStatusText(_('text to be translated here'))
TypeError: 'numpy.ndarray' object is not callable
I suspect that one or more of the app's dependencies (there are quite a few) is clobbering the global translation function, _(). One likely way would be using _ as the name of a dummy variable when unpacking a tuple (which is fairly widespread practice). I made sure it's not my app that is doing this, so I suspect a dependency is. Is there some way to "defend" against this, or otherwise deal with the issue?
I suspect this is a common situation, and that people have worked out how to handle it properly. Otherwise, I'll go with something like using a nonstandard name, such as _translate, instead of _. I think this would work, but it would be more verbose and a little harder to read.
From the above I cannot see what is going wrong.
I don't have issues with I18N in my wxPython application, and I do use matplotlib and numpy in it (not extensively).
Can you give the full traceback and/or a small runnable sample which shows the problem?
BTW, have you seen this page in the wxPython Phoenix docs, which gives some other references at the end?
wxpython.org/Phoenix/docs/html/internationalization.html
Aha, if Translate works then you are running into the issue of Python stealing "_"; you can work around that by doing this:
Install a custom displayhook to keep Python from setting the global _ (underscore) to the value of the last evaluated expression. If we don't do this, our mapping of _ to gettext can get overwritten. This is useful/needed in interactive debugging with PyShell.
You do this by defining the following in your App module:
def _displayHook(obj):
    """Custom display hook to prevent Python stealing '_'."""
    if obj is not None:
        print(repr(obj))
and then in your wx.App.OnInit method do:
import sys  # at module level, if not already imported

# work around for Python stealing "_"
sys.displayhook = _displayHook
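If you'd rather sidestep _ entirely (the _translate idea from the question), a minimal sketch using the standard gettext module could look like this; the "myapp" domain and "locale" directory are placeholder values:

import gettext

# Bind the translation function to a name that tuple unpacking or an
# interactive shell will not clobber.
t = gettext.translation("myapp", localedir="locale", fallback=True)
_translate = t.gettext

# later, e.g. inside a frame:
#   self.SetStatusText(_translate("text to be translated here"))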
Is it possible to check whether a Lua script contains errors without executing it? I have the following code:
if (luaL_loadbuffer(L, data, size, name))
{
    fprintf(stderr, "%s", lua_tostring(L, -1));
    lua_pop(L, 1);
}

if (lua_pcall(L, 0, 0, 0))
{
    fprintf(stderr, "%s", lua_tostring(L, -1));
    lua_pop(L, 1);
}
But if the script contains errors, it passes the first if and is executed anyway. I want to know whether it contains errors when I load it, not when I execute it. Is this possible?
You can use the Lua compiler, luac. It will only compile your file to bytecode without executing it.
Your program will also have the advantage of running faster if it is compiled.
You can even use the -p option to perform only a syntax check; according to the linked man page:
-p load files but do not generate any output file. Used mainly for syntax checking or testing precompiled chunks: corrupted files will probably generate errors when loaded. For a thorough integrity test, use -t.
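For example, luac -p script.lua prints nothing and exits successfully when the file parses cleanly, and prints the syntax error otherwise (script.lua being whatever file you want to check).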
(This was originally meant as a reply to the first comment to Krtek's question, but I ran out of space there and to be honest it works as an answer just fine.)
Functions are essentially values, and thus a named function is actually a variable of that name. Variables, by their very definition, can change as a script is executed. Hell, someone might accidentally redefine one of those functions. Is that bad? To sum my thoughts up: depending on the script, parameters passed and/or actual implementations of those pre-defined functions you speak of (one might unset itself or others, for example), it is not possible to guarantee things work unless you are willing to narrow down some of your demands. Lua is too dynamic for what you are looking for. :)
If you want a flawless test: create a dummy environment with all bells and whistles in place, and see if it crashes anywhere along the way (loading, executing, etc). This is basically a sort of unit test, and as such would be pretty heavy.
If you want a basic check to see whether a script has valid syntax: Krtek gave an answer for that already. I am quite sure (but not 100%) that the Lua equivalent is loadfile or loadstring, and the respective C equivalent is to try to lua_load() the code. Each of these converts a readable script to bytecode, which you would need to do anyway before you could actually execute the code in your normal all-is-well use case. (And if the script contains function definitions, those need to be executed later on for the code inside them to run.)
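For illustration, a compile-only check done from Lua itself might look like this (a sketch in Lua 5.1 syntax; on 5.2+ use load instead of loadstring):

-- Compiles the source without running it: syntax errors are reported here,
-- runtime errors are not.
local function check_syntax(source, chunkname)
  local chunk, err = loadstring(source, chunkname)
  if chunk then
    return true
  end
  return false, err
end

print(check_syntax("print('ok')"))     --> true
print(check_syntax("print('broken'"))  --> false, plus the compiler's error message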
However, that is the extent of your options with regard to pre-empting errors before they actually happen. Lua is a very dynamic language, and what is a great strength also makes for a weakness when you want to prove correctness. There are simply too many variables involved for a perfect solution.
In general it is not possible, as Lua is a dynamic language, and most errors happen at runtime.
If you want to check for syntax errors, use the luac -p option. I use it as part of my pre-commit hook, for example.
Other common errors are triggered by misusing global variables. You may analyze the output of luac -l to catch these cases. See here: http://lua-users.org/wiki/DetectingUndefinedVariables.
If you want something more advanced, there are several more-or-less functional static analysis tools for Lua code. Start with LuaInspect.
In any case, you are advised to write unit tests instead of just relying on static code checks. Less pain, more gain.
I'm using a closed-source application that loads Lua scripts and allows some customization through modifying these scripts. Unfortunately that application is not very good at generating useful log output (all I get is 'script failed') if something goes wrong in one of the Lua scripts.
I realize that dynamic languages are pretty much resistant to static code analysis in the way C++ code can be analyzed for example.
I was hoping though, there would be a tool that runs through a Lua script and e.g. warns about variables that have not been defined in the context of a particular script.
Essentially what I'm looking for is a tool that for a script:
local a
print(b)
would output:
warning: script.lua(1): local 'a' is not used
warning: script.lua(2): 'b' may not be defined
It can only really be warnings for most things, but that would still be useful! Does such a tool exist? Or maybe a Lua IDE with a feature like that built in?
Thanks, Chris
Automated static code analysis for Lua is not an easy task in general. However, for a limited set of practical problems it is quite doable.
Quick googling for "lua lint" yields these two tools: lua-checker and Lua lint.
You may want to roll your own tool for your specific needs however.
Metalua is one of the most powerful tools for static Lua code analysis. For example, please see metalint, the tool for global variable usage analysis.
Please do not hesitate to post your question on Metalua mailing list. People there are usually very helpful.
There is also lua-inspect, which is based on the already-mentioned Metalua. I've integrated it into the ZeroBrane Studio IDE, which generates output very similar to what you'd expect. See this SO answer for details: https://stackoverflow.com/a/11789348/1442917.
For checking globals, see this lua-l posting. Checking locals is harder.
You need to find a parser for Lua (one should be available as open source) and use it to parse the script into a proper AST. Use that tree and a simple variable-visibility tracker to find out when a variable is or isn't defined.
Usually the scoping rules are simple:
start with the top AST node and an empty scope;
look at the child statements of that node: every variable declaration should be added to the current scope;
if a new scope starts (for example when entering a do ... end block or a function body), create a new variable scope that inherits the variables of the current scope;
when a scope ends (at the matching end), discard the current child scope and return to its parent;
iterate carefully.
This will tell you which variables are visible where inside the AST. You can use this information, and if you also inspect the expression AST nodes (reads/writes of variables), you can find out what you need.
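To illustrate the idea (not a real tool), here is a sketch of such a scope tracker over a hypothetical AST shape with kind, name and body fields; a real implementation would sit on top of an actual Lua parser:

local function check(node, scope)
  scope = scope or { vars = {}, parent = nil }

  local function is_defined(s, name)
    while s do
      if s.vars[name] then return true end
      s = s.parent
    end
    return false
  end

  if node.kind == "local" then
    scope.vars[node.name] = true                -- declaration goes into the current scope
  elseif node.kind == "read" and not is_defined(scope, node.name) then
    print(("'%s' may not be defined"):format(node.name))
  elseif node.kind == "block" then
    local child = { vars = {}, parent = scope }  -- new scope inheriting via the parent link
    for _, stmt in ipairs(node.body) do
      check(stmt, child)
    end
    return                                       -- leaving the block discards the child scope
  end

  for _, stmt in ipairs(node.body or {}) do
    check(stmt, scope)
  end
end

-- Roughly the example script from the question: local a; print(b)
check({ kind = "block", body = {
  { kind = "local", name = "a" },
  { kind = "read",  name = "b" },
} })
--> 'b' may not be defined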
I just started using luacheck and it is excellent!
Its first release was in 2015.