I want to tell, in Octave, whether a given environment variable exists.
Something that works in most cases is size(getenv(varname))(1).
If this equals 1, the variable exists.
But if it equals 0, the variable may not exist, or it may exist and be set to a null value.
How can I distinguish these two cases?
Can this be done "natively" in Octave?
I would like my script to work irrespective of the host OS and shell.
On a POSIX system one could issue a shell command and read the result.
For Windows one would have to check in another way, and write a wrapper that deals with the two cases separately.
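For reference, a hedged sketch of such a wrapper in Octave (the function name env_exists is made up, and the shell behaviours relied on are noted in the comments):
## Sketch: return true if the environment variable VARNAME is defined,
## even when it is set to an empty value.
function tf = env_exists (varname)
  if (ispc ())
    ## cmd.exe: "set VAR" exits non-zero when no matching variable exists.
    ## Caveat: it matches by prefix, so "set PATH" also matches PATHEXT.
    status = system (sprintf ("set %s >nul 2>&1", varname));
  else
    ## POSIX sh: ${VAR+x} expands to "x" only if VAR is set, empty or not.
    status = system (sprintf ('test "${%s+x}" = x', varname));
  endif
  tf = (status == 0);
endfunction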
EDIT (tl;dr)
I was using a defined/not-defined environment variable to handle compilation cases, via a combination of
if(DEFINED ENV{ENV_TEST})
  add_definitions(-DENV_TEST=$ENV{ENV_TEST})
endif()
in CMakeLists.txt and
// ENV_TEST arrives as a bare token (-DENV_TEST=foo), so it has to be
// stringified before it can be concatenated with the string literals.
#define STR_(x) #x
#define STR(x) STR_(x)

#ifdef ENV_TEST
cout << "ENV_TEST is defined and set to '" STR(ENV_TEST) "'" << endl;
#else
cout << "ENV_TEST is not defined" << endl;
#endif
in myprog.cc.
This is an MCVE-fication of my actual case, where the three possible states (1. not defined, 2. defined and null, 3. defined and not null) produce different results.
In my actual case I only needed to distinguish #1 vs. #2 (hence this question), but I could turn all my #2 cases into #3 cases, so now Octave can also tell them apart.
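For reference, a minimal sketch of telling the three states apart on the CMake side (same ENV_TEST variable as above; the messages are just placeholders):
if(NOT DEFINED ENV{ENV_TEST})
  message(STATUS "ENV_TEST is not defined")           # state 1
elseif("$ENV{ENV_TEST}" STREQUAL "")
  message(STATUS "ENV_TEST is defined and null")      # state 2
else()
  message(STATUS "ENV_TEST is defined and not null")  # state 3
endif()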
The downside is that this was deployed across several computers, so instead of writing the Octave code that would handle this right away, I considered changing the handling as described above. I am not sure this will not have side effects...
@sancho.s ReinstateMonicaCellio, if you don't have a specific use case then this is either an XY problem (see http://xyproblem.info) or you're wasting people's time by writing a misleading question that boils down to "Shouldn't GNU Octave provide a way to determine if an env var is defined?" See https://octave.org/doc/v4.0.1/Environment-Variables.html.
I can't speak to why the GNU Octave developers did not provide a mechanism for determining if an env var is not defined. But I'll bet it has a lot to do with the general design of the language.
You wrote "I would like my script to work irrespective of the host OS and shell." But your question is independent of the host OS and shell: the problem is that GNU Octave apparently does not provide an API for distinguishing whether an env var is unset or set to an empty string. That has nothing to do with the OS or shell from which you launched GNU Octave.
Related
I'm using Lua with LuaJIT 2.0.4, and I'm wondering: is it possible to restore the original parts of the code from the dumps of Lua functions?
function a(l)
  if l > 3 then
    print(l*l)
  end
end
local b = string.dump(a)
In this example I am dumping the function 'a' with string.dump, and this leads me to questions like:
1. Is it possible to write this dump into a .txt file?
2. Is it possible to get the original names of functions, variables, and upvalues?
3. Is it possible to get strings, numbers, tables?
4. Is it possible to restore it to the full code, and if not, is it possible to get a disassembled listing?
"Yes" to all questions with a couple of caveats. For (1), make sure that "b" is used as part of the "mode" parameter in io.open on Windows, as the output of string.dump will have some binary content. For (2), it's only true when string.dump is used without the strip option, which was added in LuaJIT:
string.dump(f [,strip])
An extra argument has been added to string.dump(). If set to true,
'stripped' bytecode without debug information is generated. This
speeds up later bytecode loading and reduces memory usage.
For (4), I found this document to be very useful: http://files.catwell.info/misc/mirror/lua-5.2-bytecode-vm-dirk-laurie/lua52vm.html (it's for Lua 5.2, but most of the content applies to LuaJIT as well); it also includes a section on the difference between full and stripped bytecode that may answer some of your questions.
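To illustrate (1), a minimal sketch (the file name is arbitrary; note the "b" in the mode):
-- Write the dump of `a` to a file; "wb" matters on Windows because
-- string.dump produces binary content.
local f = assert(io.open("a.dump", "wb"))
f:write(string.dump(a))
f:close()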
I'm trying to understand config_setting for detecting the underlying platform and have some doubts. Could you help me clarify them?
What is the difference between the x64_windows and x64_windows_(msvc|msys) cpus? If I create config_settings for all of them, will only one of them trigger? Should I just ignore x64_windows?
To detect Windows, what is the recommended way? Currently I'm doing:
config_setting(
    name = "windows",
    values = {"crosstool_top": "//crosstools/windows"},
)

config_setting(
    name = "windows_msvc",
    values = {
        "crosstool_top": "//crosstools/windows",
        "cpu": "x64_windows_msvc",
    },
)

config_setting(
    name = "windows_msys",
    values = {
        "crosstool_top": "//crosstools/windows",
        "cpu": "x64_windows_msys",
    },
)
By using this I want to use :windows to match all Windows versions and :windows_msvc, for example, to match only MSVC. Is this the best way to do it?
What is the difference between darwin and darwin_x86_64 cpus? I know they match macOS, but do I need to always specify both when selecting something for macOS? If not, is there a better way to detect macOS with only one config_setting? Like using //crosstools with Windows?
How do I detect Linux? I know you can detect the operating systems you care about first and then use //conditions:default, but it'd be nice to have a way to detect specifically Linux and not leave it as the default.
What are k8, piii, etc? Is there any documentation somewhere describing all the possible cpu values and what they mean?
If I wanted to use //crosstools to detect each platform, is there somewhere I can look up all available crosstools?
Thanks!
Great questions, all. Let me tackle them one by one:
--cpu=x64_windows_msys triggers the C++ toolchain that relies on MSYS/Cygwin. --cpu=x64_windows_msvc triggers the Windows-native (MSVC) toolchain. --cpu=x64_windows triggers the default, which is still MSYS but being converted to MSVC.
Which ones you want to support is up to you, but it's probably safest to support all for generality (and if one is just an alias for the other it doesn't require very complicated logic).
Only one config_setting can trigger at a time.
Unless you're using a custom --crosstool_top= flag to specify Windows builds, you'll probably want to trigger on --cpu, e.g.:
config_setting(
    name = "windows",
    values = {"cpu": "x64_windows"},
)
There's not a great way right now to define "all Windows". This is a current deficiency in Bazel's ability to recognize platforms, which settings like --cpu and --crosstool_top don't model quite the right way. Ongoing work to create a first-class concept of platform will provide the best solution to what you want. But for now --cpu is probably your best option.
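For completeness, a sketch of how such a setting is typically consumed in a BUILD file (the target and file names here are made up):
cc_library(
    name = "mylib",
    srcs = ["mylib_common.cc"] + select({
        ":windows": ["mylib_windows.cc"],
        "//conditions:default": ["mylib_posix.cc"],
    }),
)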
This would basically be the same story as Windows. But to my knowledge there's only darwin for default crosstools, no darwin_x86_64.
For the time being it's probably best to use the //conditions:default approach you'd rather not do. Once first-class platforms are available that'll give you the fidelity you want.
k8 and piii are pseudonyms for 64-bit and 32-bit x86 CPUs, respectively. They also tend to be associated with "Linux" by convention, although this is not a guaranteed 1-1 match.
There is no definitive set of "all possible CPU values". Basically, --cpu is just a string that gets resolved in CROSSTOOL files to toolchains with identifiers that match that string. This allows you to write new CROSSTOOL files for new CPU types you want to encode yourself. So the exact set of available CPUs depends on who's using Bazel and how they have their workspace set up.
For the same reasons as 5., there is no definitive list. See Bazel's GitHub tools/ directory for references to defaults.
I'm re-building a Lua to ES3 transpiler (a tool for converting Lua to cross-browser JavaScript). Before I start spending more ideas on this transpiler, I want to ask whether it's possible to convert Lua labels to ECMAScript 3. For example:
goto label;
:: label ::
print "skipped";
My first idea was to separate each body of statements into parts, e.g., when there's a label, its following statements must be stored as an entire next part:
some body
label (& statements)
other label (& statements)
and so on. Every statement that has a body (or the program chunk) gets a list of parts like this. Each part for a label should have its name stored somewhere (e.g., in its own part object, inside a property).
Each part would be a function or would store a function on itself to be executed sequentially in relation to the others.
A goto statement would look up its specific label to run its statements, and invoke an ES return statement to stop the current statements' execution.
The limitation of separating the body statements in this way is accessing the variables and functions defined in different parts... So, is there an idea or answer for this? Is it impossible to have stable labels when converting them to ECMAScript?
I can't quite follow your idea, but it seems someone already solved the problem: JavaScript allows labelled continues, which, combined with dummy while loops, permit emulating goto within a function. (And unless I forgot something, that should be all you need for Lua.)
Compare pages 72-74 of the ECMAScript spec ed. #3 of 2000-03-24 to see that it should work in ES3, or just look at e.g. this answer to a question about goto in JS. As usual on the 'net, the URLs referenced there are dead but you can get summerofgoto.com [archived] at the awesome Internet Archive. (Outgoing GitHub link is also dead, but the scripts are also archived: parseScripts.js, goto.min.js or goto.js.)
I hope that's enough to get things running, good luck!
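A minimal sketch of that trick for your example (a labelled do ... while (false) so that break acts as a forward goto; the output calls are placeholders):
function demo(skip) {
    label: do {
        if (skip) break label;     // behaves like `goto label`
        // ...statements the goto would jump over...
        alert("not skipped");
    } while (false);
    // statements after ::label:: continue here
    alert("skipped");
}
Backward jumps work the same way, with a labelled while (true) and continue label instead.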
I'm working on some legacy code in which there are a lot of WriteLn(F, '...') commands scattered pretty much all over the place. There is some useful information in these commands (what information variables contain, etc), so I'd prefer not to delete it or comment it out, but I want to stop the program from writing the file.
Is there any way that I can assign the F variable so that anything written to it is ignored? We use the console output, so that's not an option.
Going back a long, long time to the good old days of DOS: if you assign 'f' to the device 'nul', then there should be no output.
assign (f, 'nul')
I don't know whether this still works in Windows.
Edit:
You could also assign 'f' to a file - assignfile (f, 'c:\1.txt') - for example.
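A complete sketch of the device approach (Turbo Pascal style; remember that F must be opened before the first WriteLn):
var
  F: Text;
begin
  Assign(F, 'nul');   { DOS/Windows null device; on Unix try '/dev/null' }
  Rewrite(F);         { open F for writing before any WriteLn(F, ...) }
  WriteLn(F, 'this line is discarded');
  Close(F);
end.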
Opening the null device and letting output go there would probably work. Under DOS, the performance of the NUL device was astonishingly bad, IIRC (from what I understand it wasn't buffered, so the system had to look up NUL in the device table when processing each byte), but I would not be at all surprised if it has improved under newer systems. In any case, that's probably the easiest thing you can do unless you really need to maximize performance.

If performance is critical, it might in theory be possible to override the WriteLn function so that it does nothing for certain files, but unfortunately I believe WriteLn allows syntax forms that are not permissible for any user-defined function.
Otherwise, I would suggest doing a regular-expression find/replace to comment out the WriteLn statements in a fashion that can be mechanically restored.
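A hedged sketch of that find/replace with GNU sed, assuming each WriteLn call fits on one line, is spelled consistently, and the compiler accepts Delphi-style // comments; the {MUTED} marker is an arbitrary tag that makes the change mechanically reversible:
sed -i 's|WriteLn(F,|{MUTED}//&|' *.pas    # comment out
sed -i 's|{MUTED}//||' *.pas               # restore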
I'm working on a large project involving multiple documents typeset in LaTeX. I want to be consistent in my use of symbols, so it might be a nice idea to define a command for every symbol that has a specific meaning throughout the project. Does anyone have any experience with this? Are there issues I should pay attention to?
A little more specific: say that throughout the document I would denote something called permeability by a script P. Would it be an idea to define
\providecommand{\permeability}{\mathscr{P}}
or would this be more like the case "defining a command for $n$"?
A few tips:
Using \providecommand will define that command only if it's not been previously defined. So if you're not getting the results you expected, you may be trying to define a command that's been defined elsewhere.
If you wrap the math in your commands with \ensuremath, it will do the right thing regardless of whether you're in math mode when you issue the command:
\providecommand{\permeability}{\ensuremath{\mathscr{P}}}
Now I can easily use \permeability in text or $\permeability$ in math mode.
Using your own commands allows you to easily change the typographical representation of something later. For instance:
\newcommand{\vect}[1]{\ensuremath{\mathbf{#1}}}
would print \vect{x} as a boldfaced x. If you later decide you prefer arrows above your vectors, you could change the command to:
\newcommand{\vect}[1]{\ensuremath{\vec{#1}}}
I have been doing this for anything that has a specific meaning and is longer than a single symbol, mostly to save typing:
\newcommand{\objId}{\mbox{$\mathit{objId}$}\xspace}
\newcommand{\insOp}[1]{#1\mbox{$^+$}\xspace}
\newcommand{\delOp}[1]{#1\mbox{$^-$}\xspace}
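(One caveat: \xspace is not built in, so definitions like these assume something along the following lines in the preamble.)
\usepackage{xspace}  % provides \xspace, the smart trailing space used above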
However, I then noticed that I stopped making inconsistency errors (objId vs. ObjId vs. ObjID), so I agree that it is a nice idea.
However, I am not sure it is a good idea when the symbols in the output are, well, single Latin letters, as in:
\newcommand{\numOfObjs}{$n$}
It is too easy to type a single symbol and forget about it even though a command was defined for it.
EDIT: using your example, IMHO it'd be a good idea to define \permeability, because without the command it is more than a single P that you have to type. But it's a close call.