Local variables implementation - environment-variables

I've been using the fish shell for a bit now, and I recently had a conversation with a coworker about local variables. Apparently, Bash doesn't support local variables and only uses environment variables to communicate dynamic data between processes. Are local variables also just environment variables, but with something extra? I'm curious how fish has implemented this behavior.

Bash doesn't support local variables
That's not true. Bash (and other shells, including dash, where it is one of the few extensions beyond POSIX) has the local keyword to create local variables. Variables just default to global there, while fish defaults to local.
Also, when you say "environment variables", what you mean is "exported" variables, which require an explicit "export" step in POSIX-style shells, and the "-x" or "--export" flag to set in fish.
I.e., there are two different things at play here: whether a variable is available only in this function/block (and not outside it), and whether it is passed on to children, including external processes.
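To make the scope half of that concrete, here is a minimal bash sketch (the function and variable names are made up for illustration):
greet() {
    local inner="exists only while greet runs"     # bash's local keyword
    outer="global by default, even inside a function"
}
greet
echo "[$inner]"   # prints [] - the local variable was popped with the function
echo "$outer"     # prints the global value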
Are local variables also just environment variables, but with something extra?
Non-exported variables are something less: they aren't put into the process environment (e.g. via the setenv function), so the OS doesn't copy them to child processes.
Local variables are removed when the block ends. In practice this can be implemented nicely by putting them on a stack and "popping" the top.
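To see the export part in isolation, a short bash illustration (the variable names are arbitrary; the child process is just sh -c):
unexported="only in this shell"
sh -c 'echo "child sees: [$unexported]"'   # child sees: [] - not in the environment

export exported="copied into the environment"
sh -c 'echo "child sees: [$exported]"'     # child sees: [copied into the environment]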
Note that, in fish at least, these concepts are entirely orthogonal (see the sketch below):
You can have local-exported variables (with set -lx), and they'll be passed to external commands and copied to functions (so they get their own local version), but removed when the function ends. These are useful for changing something temporarily, e.g. to set $PATH just for one function, or to override $EDITOR when calling something.
And you can have global-unexported variables, which can be accessed by functions but not by external commands. These are useful for shell settings like $fish_function_path, which isn't useful to external tools, or $COLUMNS, which might even break external tools if exported (because they start reading it instead of checking the terminal size themselves).
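A small fish sketch of both combinations; my_build, some_external_tool, my_shell_setting and /opt/mytools/bin are invented names used only for illustration:
# Local-exported: passed to children, but gone when the function returns.
function my_build
    set -lx EDITOR "vim -n"              # override $EDITOR just for this call
    set -lx PATH /opt/mytools/bin $PATH  # hypothetical dir, prepended temporarily
    some_external_tool                   # hypothetical command; it sees both changes
end

# Global-unexported: visible to functions in this shell, invisible to externals.
set -g my_shell_setting "only for the shell"
sh -c 'echo "external sees: [$my_shell_setting]"'   # prints empty brackets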

There seem to be some misconceptions here:
bash can have variables that are local to a function: https://www.gnu.org/software/bash/manual/bash.html#index-local
Not every shell (bash/fish/etc) variable is in the environment. This is why the export (bash) and set -x (fish) commands exist.
For two separate processes to share the same variable value, you must pass it via the environment. The environment is the way to expose shell variables to other processes.

Related

How to set a global environment variable to the work directory on Jenkins

I have a Jenkins job where I need to use a global (user-defined) environment variable that I created under the global environment variables.
I want to use this global variable in the work directory.
I have attached a screenshot to this question.
For example, in the work directory I want to use
bin/release/user_defined_variable
Thanks in advance

Removing nonexistent path variables automatically

I've accumulated a lot of entries in my user and system Path variables. I'm sure some of them don't even exist anymore, so I was going to check them one by one. But is there an automatic way to do it?
There is no native Windows function to perform such a purge.
You would need to write a script (a rough sketch follows the list) which would:
split the %PATH% as described in "How can I use a .bat file to remove specific tokens from the PATH environment variable?"
build a new string from the folders that still exist
run setx PATH "<new string>"
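A rough batch sketch of those steps. It assumes no PATH entry contains quotes or exclamation marks; also note that %PATH% is the combined user+system value while setx (without /M) writes only the user part, and that setx truncates values longer than 1024 characters, so review the echoed result before applying it:
@echo off
setlocal EnableDelayedExpansion
set "NEWPATH="
rem Split %PATH% on ";" by turning every entry into a quoted token.
for %%D in ("%PATH:;=" "%") do (
    rem Keep only the folders that still exist on disk.
    if exist "%%~D\" (
        if defined NEWPATH (set "NEWPATH=!NEWPATH!;%%~D") else (set "NEWPATH=%%~D")
    )
)
echo !NEWPATH!
rem After reviewing the echoed value, persist it (user-level) with:
rem setx PATH "!NEWPATH!"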

Appending to $PATH vs using aliases: Which is better?

In at least some cases, aliases and adding $PATH locations can be used interchangeably. For example, looking at the Python tool couchapp, I need to either alias the executable (as helpfully described here) or make the executable available via $PATH.
These are the two lines that can achieve this:
alias couchapp="~/Library/Python/2.7/bin/couchapp"
OR
export PATH=$PATH:~/Library/Python/2.7/bin/
Is there a very definite 'better' option of these two? Why or why not?
An alias is a shell feature: any environment that invokes utilities directly, without involving a shell, will not see aliases.
Note: Even when calling shell commands from languages such as Python (using, e.g., os.system()), user-specific shell initialization files are typically not read, so user-specific aliases still won't be visible.
A directory added to the $PATH environment variable is respected by any process that tries to invoke an executable by mere filename, whether via a shell or not.
The caveat is that this assumes the calling process actually sees the $PATH additions of interest: additions made in user-specific shell initialization files are typically not seen unless the calling process was launched from an interactive shell.
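A small interactive-bash demonstration of that difference, reusing the couchapp lines from the question:
alias couchapp="~/Library/Python/2.7/bin/couchapp"
type couchapp            # the interactive shell reports the alias

bash -c 'type couchapp'  # a child shell (script, cron job, other program) does not
                         # inherit the alias: "couchapp: not found" unless it is on $PATH

export PATH="$PATH:$HOME/Library/Python/2.7/bin"
bash -c 'type couchapp'  # now resolved via the inherited $PATH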
Lookup cost
If you know that a shell will be involved in invoking your utility, you can keep overhead down by defining aliases that invoke your executables by their full path.
Of course, you need to do this for each executable you want to be callable by name only.
By contrast, adding directories to the $PATH variable potentially increases the overhead of locating a given executable by mere filename, because all directories listed must be searched one by one until one containing an executable by the specified name is found (if any).
Precedence
If a shell is involved, aliases take precedence over $PATH lookups.
Of course, later alias definitions can override earlier ones.
If no shell is involved or no alias by a given name exists, $PATH lookups happen in the order in which the directories are listed in the variable.
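If you want to inspect that ordering yourself, an interactive-bash check along these lines works (assuming both the alias and the $PATH entry from the question are in place):
type -a couchapp     # lists the alias first, then every $PATH match, in lookup order
couchapp             # the interactive shell runs the alias
command couchapp     # bypasses aliases and functions, falling back to the $PATH lookup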
As your example shows, a $PATH addition covers all of the executables in that location with a single line; for that reason I use the latter option. You can also chain many $PATH statements together, allowing you to easily add many more locations of "executables" from the command line.
If for some reason you do not want to make all of the executables in a directory available, an alias would be better.
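The chaining mentioned above is just repeated appends; the extra directories here are invented for illustration:
export PATH="$PATH:$HOME/Library/Python/2.7/bin"
export PATH="$PATH:$HOME/.local/bin"      # hypothetical extra location
export PATH="$PATH:/opt/sometool/bin"     # hypothetical extra location
echo "$PATH"   # directories are searched left to right, so earlier entries win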

Jenkins global variables

I'm trying to use global variables within Jenkins on Windows to "automagically" retrieve the proper code base from our SCM system, but in each case that I've tried the variable substitution is not happening.
I've set up some global variables, with default values, within "Configure System" and have tried to access them with $VARIABLE, ${VARIABLE} and %VARIABLE% as part of the Branch field for the Surround SCM plugin with no success whatsoever.
I've also installed the Global Variable String Parameter plugin with the same success rate (0%). Using a literal value works just fine, but no type of variable substitution seems to work at all and I'm sure that someone has come upon this before and resolved it.
I've tried searching for something similar to this, but nothing really approaches this usage of globals; instead it is normally discussed as a variable used within an external script, a parameter passed to a batch file, etc.
I've run "set" as the first step and can see that the variable is available, but the substitution is just not happening. If it means I will have to script something, then so be it, as I am trying to make this extremely flexible and as headache-free as possible, but so far that isn't proving to be the case.
My problem is eerily similar to this post: How are environment variables used in Jenkins with Windows Batch Command?, but again, I'm not looking to script this as it is a MUCH simpler solution to use the variable values directly.
From https://wiki.jenkins-ci.org/display/JENKINS/Surround+SCM+Plugin:
Troubleshooting
Please contact Seapine support with questions about the integration or to report bugs or feature requests.
Set your Jenkins project to be parameterized. Create a string parameter GIT_BRANCH that will be your branch variable (for example).
Under Source Control Management, use your branch variable in the form $GIT_BRANCH
That’s it. When you run your project, you will be prompted to enter a value for your GIT_BRANCH parameter.

Windows Registry Variables vs. Environment Variables?

At first glance this seems like a purely subjective/aesthetic issue, but I'd be interested to hear opinions (especially any technical ones) on whether environment variables or the registry is the preferred place for storing configuration data in a Windows environment.
I can currently only think of the following differences:
Registry settings are persistent across sessions, though I believe that environment variables can also have this property.
It's easier to set environment variables from the command-line vs. using regedit
(Counter-argument: regedit easier for non-command-line apps?)
Environment variables are more common across platforms (?).
I'm also aware that environment variables can be interrogated, modified and set from the registry.
Use environment variables when you intend your program to be configured by other applications (or by a technical user) and when that configuration could differ per instance (e.g. two instances running at the same time with different settings). Cluttering a user's environment usually isn't necessary. In most cases, use the registry, or a config file stored under %APPDATA%\YourApp.
With Windows services, environment variables can be a pain: just changing the variable and then restarting the service will not help; usually the whole system needs to be restarted.
If the service looks up its settings in the registry instead, this is much easier.
I saw this behavior on Windows XP; I'm not sure whether later versions have resolved the issue.
