Dynamic variable @*INC not found - electron

So I've been trying to get Electron working with Perl 6, and it looks like, after all my efforts of hacking things to get them to work, it just doesn't want to do its thing. I have used the following script (one of the examples from the Electron repo on GitHub):
#!/usr/bin/env perl6
use v6;
use Electron;
my $app = Electron::App.instance;
LEAVE {
    $app.destroy if $app.defined;
}
say Electron::Dialog.show-open-dialog.perl;
say Electron::Dialog.show-save-dialog.perl;
say Electron::Dialog.show-message-box.perl;
Electron::Dialog.show-error-box("Text", "Content");
prompt("Press any key to exit");
On running it, I get this error:
Dynamic variable @*INC not found
in submethod initialize at C:\rakudo\share\perl6\site\sources\42D84B59BC3C5A414EA59CC2E3BC466BBAF78CDA line 54
in method instance at C:\rakudo\share\perl6\site\sources\42D84B59BC3C5A414EA59CC2E3BC466BBAF78CDA line 33
in block <unit> at test.p6 line 9
Actually thrown at:
in method throw at C:\rakudo/share/perl6/runtime/CORE.setting.moarvm line 1
in block at C:\rakudo\share\perl6\site\sources\42D84B59BC3C5A414EA59CC2E3BC466BBAF78CDA line 55
in submethod initialize at C:\rakudo\share\perl6\site\sources\42D84B59BC3C5A414EA59CC2E3BC466BBAF78CDA line 48
in method instance at C:\rakudo\share\perl6\site\sources\42D84B59BC3C5A414EA59CC2E3BC466BBAF78CDA line 33
in block <unit> at test.p6 line 9
After looking at the submethod, I noticed that it is part of the Electron module for Perl 6, and it seems not to like the use of @*INC within the module.
Has anyone managed to successfully use the electron module with Perl6? Has anyone else come across this error? Is there an easy way around it?
I can probably modify the module to get it to compile and run, but I wouldn't know where to start with replacing the @*INC.

$*REPO is the 6.c replacement for @INC in Perl 5
In Perl 5, the @INC variable is a global array of paths to be searched when Perl is looking for modules (analogous to the PATH variable many OSes use to list the paths to be searched when the OS is looking for programs).
Until recently, Perl 6 had a corresponding @*INC variable.
Having an array for this turned out to be inappropriate for 6.c given concurrent module loading and advanced module selection features introduced by the Perl 6 module repository mechanism.
About a month or two before 6.c a lead dev (Stefan Seifert aka nine) switched module loading to use a chained repo approach via a new $*REPO scalar and obsoleted the include array.
For various reasons they did this without a deprecation period.
Any pre-6.c modules that directly mention @*INC need an update, and some haven't yet gotten that update. The Electron module was one such -- until you filed an issue (thanks!) and the module's author responded by fixing it.
I'm not aware of any "official" design or end-user documentation of $*REPO. The best info is probably to be found by asking user nine on the freenode IRC channel #perl6-toolchain (logs; join).
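To get a feel for the replacement mechanism, here is a minimal sketch (assuming a recent Rakudo; repo-chain is the method that exposes the chained repositories):
use v6;
# $*REPO points at the head of a linked chain of module repositories.
say $*REPO.^name;
# Walking the chain is the moral equivalent of printing @INC in Perl 5.
say $_ for $*REPO.repo-chain;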

Why do I get different runtimepaths depending on which API I use?

I'm trying to run Neovim 0.8.1 on a Windows 11 environment.
My setup is really minimal:
I downloaded nvim-win64.zip (of version 0.8.1) from Neovim's releases page on GitHub.
Extracted it and moved it to a folder at C:\test\nvim-win64
Started up Neovim by executing C:\test\nvim-win64\bin\nvim.exe
Not using any custom config.
There are two ways of inspecting my runtimepath:
Using :set runtimepath? (the Vimscript way). This gives me:
runtimepath=~\AppData\Local\nvim,~\AppData\Local\nvim-data\site,C:\test\nvim-win64\share\nvim\runtime,C:\test\nvim-win64\share\nvim\runtime\pack\dist\opt\matchit,C:\test\nvim-win64\lib\nvim,~\AppData\Local\nvim-data\site\after,~\AppData\Local\nvim\after
Using :lua print(vim.inspect(vim.api.nvim_list_runtime_paths())) (the Lua way). This gives me:
{ "C:\\test\\nvim-win64\\share\\nvim\\runtime", "C:\\test\\nvim-win64\\share\\nvim\\runtime\\pack\\dist\\opt\\matchit", "C:\\test\\nvim-win64\\lib\\nvim" }
As you can see, with the Lua way I seem to be missing the local config directories in my runtimepath (the ~\AppData\Local\* paths).
Why am I seeing this difference? This is blocking me from using XDG_CONFIG_HOME to point at the config I typically use, because that path does not get included in the nvim_list_runtime_paths list, even though it does appear in :set runtimepath?.
The Nvim API function filters out directories that do not exist on disk, while :set runtimepath? shows the raw option value. That accounts for the difference.
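A minimal sketch to see the two views side by side (run inside Neovim; vim.o.runtimepath, nvim_list_runtime_paths and vim.loop.fs_stat are all standard Nvim Lua API):
-- The raw option value, split into its individual entries.
local raw = vim.split(vim.o.runtimepath, ",")
-- The filtered view: only directories that actually exist on disk.
local existing = vim.api.nvim_list_runtime_paths()
print(("option lists %d entries, API returns %d"):format(#raw, #existing))
-- Report the entries the API dropped; expand() resolves the leading "~".
for _, p in ipairs(raw) do
  if not vim.loop.fs_stat(vim.fn.expand(p)) then
    print("missing on disk: " .. p)
  end
end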
My issue was that my employer had decided to put ( and ) characters in my %USERPROFILE% environment variable, which ended up breaking a bunch of stuff (including the list I got from nvim_list_runtime_paths).
Putting those characters in %USERPROFILE% is a bad idea for many reasons, so I moved all of my files and folders out of any (sub)directory of %USERPROFILE% and directly into C:\.
I also had to define XDG_CONFIG_HOME, XDG_DATA_HOME and XDG_STATE_HOME to point to a different location than the default, which is inside %USERPROFILE%.
This made all of my troubles go away!

With Jenkins Job Builder (JJB) what's the preferred way to inject values into a static set of job configuration files?

I have a working set of JJB YAML files successfully creating jobs and folders.
I now want to make certain values I use inside those YAML files configurable, i.e. when running jenkins-jobs test|update -r jobfolder I want to set values for folder prefixes (to avoid damaging existing production jobs), names for branches, nodes, etc.
I don't want to use JJB's defaults approach for this since I'm already using it for configuration elsewhere, and it results in conflicts when used in projects and jobs together.
The ideal way of doing this that I can think of would be a way to call JJB like this:
jenkins-jobs test|update --define "folder-prefix=experimental/,node=test-node" -r jobfolder
Giving me variables I can use in the actual job definition files.
Since this option seemingly doesn't exist, I'm currently trying to provide files which contain those variables and somehow 'inject' them into my project.
Those are the approaches I can think of:
1 - having different configuration folders with YAML files inside, which I would use like this:
jenkins-jobs test -r experimental-config:jobfolder
jenkins-jobs test -r production-config:jobfolder
with experimental-config and production-config being folders with additional files containing the configuration I want to switch between.
But unfortunately I don't know how I would reference values defined in a different YAML file. Is that even possible?
2 - having include files as described in the documentation
While that sounds promising, I didn't manage to actually make this run. I tried to turn the following 'configuration header' I'm already using:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: "experimental/"
    node: "test-node"
[Rest of the file making use of dynamic-config]
into something making use of the !include statement like this:
!include: dynamic-config.yaml.inc
[Rest of the file making use of stuff defined in dynamic-config.yaml.inc]
giving me a seemingly unrelated parser error:
yaml.parser.ParserError: expected '<document start>', but found '<block sequence start>'
in "/home/me/my/project.yml", line 11, column 1
so I tried this snippet, which looks more like the example, by putting the include inside an existing element:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    !include: dynamic-config.yaml.inc
giving me a different error but still an error:
yaml.scanner.ScannerError: while scanning a simple key
in "/home/me/my/project.yml", line 7, column 5
could not find expected ':'
in "/home/me/my/project.yml", line 8, column 5
In both cases it makes no difference whether the specified include file exists, which makes me doubt you can 'include' a file like this at all.
What am I doing wrong here? Is there a more obvious / straight forward way to customize a jenkins-jobs run?
Update:
I somehow managed to use the !include tag for individual items now, like this:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: !include: job-configs/active/folder-prefix.inc
    branch-name: !include: job-configs/active/branch-name.inc
    node-name: !include: job-configs/active/node-name.inc
But I wasn't able to put the whole dynamic-config element (with the anchor) into an include file yet.
2nd update:
Looks like I'm trying something similar to the person in this question.
Can someone confirm that this is currently still a problem? What's the JJB way of handling this?

How to call a function from a Lua file of a C++/Lua project in an interactive terminal?

I'm reading the source code of a project which is a combination of C++ and Lua; they are intertwined through Luabind.
There is a la.lua file, in which there is a function exec(arg). The Lua file also uses functions/variables from other Lua files, so it has statements like the following at the beginning:
module(..., package.seeall);
print("Loading "..debug.getinfo(1).source.."...")
require "client_config"
Now I want to run la.exec() from an interactive terminal (on Linux), but I get errors like
attempt to index global 'lg' (a nil value)
if I want to import la.lua, I get
require "la"
Loading @./la.lua...
./la.lua:68: attempt to index global 'ld' (a nil value)
stack traceback:
./lg.lua:68: in main chunk
[C]: in function 'require'
stdin:1: in main chunk
[C]: ?
What can I do?
Well, what could be going wrong?
(Really general guesswork follows; there's not much information in what you provided…)
One option is that you're missing dependencies because the files don't properly require all the things they depend on. (If A depends on & requires B and then C, and C depends on B but doesn't require it because it's implicitly loaded by A, directly loading C will fail.) So if you throw some hours at tracking down & fixing dependencies, things might suddenly work.
(However, depending on how the modules are written, this may be impossible without a lot of restructuring. As an example, unless you set package.loaded["foo"] to foo's module table in foo before loading submodules, those submodules cannot require "foo". (Luckily, module does that; in newer code without module it's often forgotten – and then you'll get an endless loop (until the stack overflows) of foo loading other modules which load foo which loads other modules which …) Further, while "fixing" things so they load in the interpreter, you might accidentally break the load order used by the program/library under normal operation, which you won't notice until you try to run it normally again. So it may simply cost too much time to fix dependencies. You might still be able to track down enough to construct a long lua -lfoo -lbar … one-off dependency list which might get things to run, but don't depend on it.)
Another option is that there are missing parts provided by C(++) modules. If these are written in the style of a Lua library (i.e. they have luaopen_FOO), they might load in the interpreter. (IIRC that's unlikely for C++ because it expects the main program to be C++-aware but lua is (usually? always?) plain C.) It's also possible that these modules don't work that way and need to be loaded in some other way. Yet another possibility might be that the main program pre-defines things in the Lua state(s) that it creates, which means that there is no module that you could load to get those things.
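If you want to test whether a given C module loads in the plain interpreter, here is a minimal sketch (the file name foo.so and the loader name luaopen_foo are hypothetical placeholders):
-- Try to open a C library the way require would for a plain Lua C module.
local open, err = package.loadlib("./foo.so", "luaopen_foo")
if open then
  local foo = open()  -- calling the loader returns the module table
  print(foo)
else
  print("not loadable as a plain Lua C library: " .. tostring(err))
end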
While there are some more variations on the above, these should be all of the general categories. If you suspect that your problem is the first one (merely missing dependency information), maybe throw some more time at this as you have a pretty good chance of getting it to work. If you suspect it's one of the latter two, there's a very high chance that you won't get it to work (at least not directly).
You might be able to side-step that problem by patching the program to open up a REPL and then do whatever it is you want to do from there. (The simplest way to do that is to call debug.debug(). It's really limited (no multiline, no implicit return, crappy error information), but if you need/want something better, something that behaves very much like the normal Lua REPL can be written in ~30 lines or so of Lua.)
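For reference, a minimal sketch of such an embedded REPL in the spirit of the ~30-line suggestion above (works on Lua 5.1 via loadstring and on 5.2+ via load):
local compile = loadstring or load  -- 5.1 vs. 5.2+
while true do
  io.write("> ")
  local line = io.read("*l")
  if not line then break end  -- EOF (Ctrl-D) exits the loop
  -- Try "return <expr>" first so plain expressions print their value.
  local chunk, err = compile("return " .. line, "=repl")
  if not chunk then
    chunk, err = compile(line, "=repl")  -- fall back to statements
  end
  if not chunk then
    print("syntax error: " .. err)
  else
    local ok, res = pcall(chunk)
    if not ok then
      print("runtime error: " .. tostring(res))
    elseif res ~= nil then
      print(res)
    end
  end
end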

How to avoid dependency name conflicts with the global translation function _() in Python?

I'm trying to internationalize / translate a python app that is implemented as a wx.App(). I have things working for the most part -- I see translations in the right places. But there's a show-stopper bug: crashing at hard-to-predict times with errors like:
Traceback: ...
self.SetStatusText(_('text to be translated here'))
TypeError: 'numpy.ndarray' object is not callable
I suspect that one or more of the app's dependencies (there are quite a few) is clobbering the global translation function, _(). One likely way of doing so would be using _ as the name of a dummy variable when unpacking a tuple (which is fairly widespread practice). I made sure it's not my app that is doing this, so I suspect a dependency is. Is there some way to "defend" against this, or otherwise deal with the issue?
I suspect this is a common situation, and so people have worked out how to handle it properly. Otherwise, I'll go with something like using a nonstandard name, such as _translate, instead of _. I think this would work, but it would be more verbose and a little harder to read, e.g. something like the sketch below.
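A minimal sketch of that fallback, assuming gettext supplies the catalog ('myapp' and the locale directory are hypothetical placeholders):
import gettext
# Bind the catalog to an explicit name instead of installing the global _,
# so tuple unpacking elsewhere cannot clobber it.
# 'myapp' and 'locale' are placeholder values.
translation = gettext.translation('myapp', localedir='locale', fallback=True)
_translate = translation.gettext
# Usage (more verbose than _(), but immune to name collisions):
# self.SetStatusText(_translate('text to be translated here'))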
From the above I cannot see what is going wrong.
I don't have issues with I18N in my wxPython application, and I do use matplotlib and numpy in it (not extensively).
Can you give the full traceback and/or a small runnable sample which shows the problem?
BTW, have you seen this page in the wxPython Phoenix docs, which gives some other references at the end?
wxpython.org/Phoenix/docs/html/internationalization.html
Aha, if Translate works then you run into the issue of Python stealing "_"; you can work around that by doing this:
Install a custom displayhook to keep Python from setting the global _ (underscore) to the value of the last evaluated expression. If we don't do this, our mapping of _ to gettext can get overwritten. This is useful/needed in interactive debugging with PyShell.
you do this by defining in your App module:
import sys

def _displayHook(obj):
    """Custom display hook to prevent Python stealing '_'."""
    if obj is not None:
        print(repr(obj))
and then in your wx.App.OnInit method do:
# work around for Python stealing "_"
sys.displayhook = _displayHook

Statically analysing Lua code for potential errors

I'm using a closed-source application that loads Lua scripts and allows some customization through modifying these scripts. Unfortunately that application is not very good at generating useful log output (all I get is 'script failed') if something goes wrong in one of the Lua scripts.
I realize that dynamic languages are pretty much resistant to static code analysis in the way that, for example, C++ code can be analyzed.
I was hoping, though, that there would be a tool that runs through a Lua script and e.g. warns about variables that have not been defined in the context of a particular script.
Essentially what I'm looking for is a tool that for a script:
local a
print(b)
would output:
warning: script.lua(1): local 'a' is not used
warning: script.lua(2): 'b' may not be defined
It can only really be warnings for most things, but that would still be useful! Does such a tool exist? Or maybe a Lua IDE with a feature like that built in?
Thanks, Chris
Automated static code analysis for Lua is not an easy task in general. However, for a limited set of practical problems it is quite doable.
Quick googling for "lua lint" yields these two tools: lua-checker and Lua lint.
You may want to roll your own tool for your specific needs however.
Metalua is one of the most powerful tools for static Lua code analysis. For example, please see metalint, the tool for global variable usage analysis.
Please do not hesitate to post your question on Metalua mailing list. People there are usually very helpful.
There is also lua-inspect, which is based on metalua that was already mentioned. I've integrated it into ZeroBrane Studio IDE, which generates an output very similar to what you'd expect. See this SO answer for details: https://stackoverflow.com/a/11789348/1442917.
For checking globals, see this lua-l posting. Checking locals is harder.
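The usual lua-l technique for globals is a runtime complement to static checking: trap accesses to undeclared globals with a metatable on _G. A minimal sketch in the spirit of the well-known strict.lua:
-- Raise an error on reads of globals that were never assigned.
-- (A runtime check rather than static analysis, but it catches the same typos.)
setmetatable(_G, {
  __index = function(_, k)
    error("attempt to read undeclared global '" .. tostring(k) .. "'", 2)
  end,
})
-- print(b) now fails loudly instead of silently yielding nil.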
You need to find a parser for Lua (one should be available as open source) and use it to parse the script into a proper AST. Use that tree and a simple variable-visibility tracker to find out when a variable is or isn't defined.
Usually the scoping rules are simple:
start with the top AST node and an empty scope
look at the child statements for that node. Every variable declaration should be added to the current scope.
if a new scope starts (in Lua, for example via a do, then or function that opens a block), create a new variable scope inheriting the variables of the current scope.
when a scope ends (at the matching end), discard the current child scope and return to the parent.
Iterate carefully.
This will tell you which variables are visible where inside the AST. Using that information, and also inspecting the expression AST nodes (reads/writes of variables), you can derive the warnings you are after.
I just started using luacheck and it is excellent!
The first release was from 2015.
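As a rough illustration, running it on the two-line script from the question yields warnings along these lines (exact wording and layout vary by version):
$ luacheck script.lua
Checking script.lua                               2 warnings

    script.lua:1:7: unused variable a
    script.lua:2:7: accessing undefined variable b

Total: 2 warnings / 0 errors in 1 file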
