Removing nonexistent PATH entries automatically

I've accumulated a lot of entries in my user and system Path variables. I'm sure some of them don't even exist anymore, so I was going to check them one by one. But is there an automatic way to do it?

There is no native Windows function to perform such a purge.
You would need to make a script which would:
split the %PATH% as described in "How can I use a .bat file to remove specific tokens from the PATH environment variable?"
check each folder and build a new string containing only the ones that still exist
call setx PATH "<new string>" (a minimal sketch follows below)
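For illustration, here is a minimal Python sketch of those steps. It only prints the cleaned value so you can review it before writing it back yourself; note that it sees the merged process PATH rather than the user and system halves separately, and that setx truncates values longer than 1024 characters.
import os

# Split the current PATH, keep only entries whose directory still exists,
# and print the cleaned value for review before writing it back with setx.
entries = os.environ["PATH"].split(os.pathsep)
kept = [e for e in entries if e and os.path.isdir(os.path.expandvars(e))]
print(os.pathsep.join(kept))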


Custom environment variable

Is it possible to set a custom environment variable, which will be accessible from any other plugin, the same way as $platform and $path work?
There is the EnvironmentSettings package by Daniele Niero, but my task seems simpler, so there is probably no need to dive deep into its code.
In Sublime, any plugin can modify the global process environment through os.environ in the Python runtime. All plugin code runs in the same process, so once one plugin sets an environment variable, any other plugin can access it. I would imagine this is how the package you linked to in your question modifies the environment.
A simple example of this in action can be found in Default/exec.py which you can open by using View Package File from the command palette. In the __init__ method of AsyncProcess() there is code that modifies the Sublime process environment if you pass the path argument in your sublime-build file.
A simple example that you can run from the Sublime console would be the following snippet. Once you execute that code, any plugin that you create can access os.environ["MY_VARIABLE"] to see the value.
import os
os.environ["MY_VARIBLE"]="Some Value"
With that said, in Sublime $platform is not an environment variable; it's a special variable that Sublime knows how to expand itself, divorced from the system environment outlined above.
A complete list of such variables can be viewed by executing the following code from the Sublime console:
from pprint import pprint
pprint(window.extract_variables())
The variables you get and their contents depend on application state (the platform, whether a project is currently open in the window, the current file, and so on).
The names of the variables that this returns are hard coded in the Sublime core and can't be augmented, so if you wanted extra variables here you would need to communicate that to other plugins and they would have to be modified to know how to use them.
From the sounds of what you're trying to accomplish in the comments on your question, what you may want is a sublime-settings file containing a setting that specifies the directory to use for file actions in your custom plugins. If they all load that settings file to get the path, you can modify the location in the config and have it take effect immediately. Alternatively, you could use something like a top-level module variable in one plugin and import it into the others.
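As a rough sketch of the settings approach (the settings file name, the key, and the command below are all hypothetical):
import sublime
import sublime_plugin

class FileActionCommand(sublime_plugin.WindowCommand):
    def run(self):
        # Every plugin that loads the same settings file sees edits to it
        # immediately; the file name and key here are placeholders.
        settings = sublime.load_settings("MyPlugins.sublime-settings")
        action_dir = settings.get("file_action_dir", "~/Documents")
        self.window.status_message("File actions will use: " + action_dir)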

How to autoload environment variables specific to one file path?

I am working on developing a solution that simplifies hands-on debugging of failed Jenkins builds. This involves SSH-ing to the right Jenkins node and going directly to the WORKSPACE, so you can interactively try different changes that could solve your problem.
While I solved the problem of starting an SSH session in the right directory, there is one missing bit: the shell is missing the original environment variables defined by Jenkins, and these are critical for running any commands after that. So now the first command of the build is set > .envrc, which saves them all into this shell file.
My example refers to the direnv tool, which is able to auto-load .envrc files. Due to security concerns this tool does not auto-load these files by default and gives the message direnv: error .envrc is blocked. Run direnv allow to approve its content.
So my current solution is to manually run direnv allow after ending up in the right folder.
How can I automate this so I would not have to type it? A prompt would be OK, because it would involve pressing only one key instead of typing ~12.
Please note that I am not forced to use direnv itself; I am open to other solutions.
As of v2.15.0, you can now use direnv's whitelist configuration to achieve what you described:
Specifying whitelist directives marks specific directory hierarchies
or specific directories as "trusted" -- direnv will evaluate any
matching .envrc files regardless of whether they have been
specifically allowed. This feature should be used with great care, as
anyone with the ability to write files to that directory (including
collaborators on VCS repositories) will be able to execute arbitrary
code on your computer.
For example, say that the directory hierarchy that contains the .envrcs you want to be evaluated without having to run direnv allow is located under /home/foo/bar.
Create the file /home/foo/.config/direnv/config.toml so that it contains the following:
[whitelist]
prefix = [ "/home/foo/bar" ]
Alternatively, if there is a fixed list of specific paths you want to whitelist, you can use exact rather than prefix:
[whitelist]
exact = [ "/home/foo/projectA", "/home/foo/projectB" ]
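For the Jenkins scenario in the question, a single prefix covering the node's workspace root would trust every job's .envrc at once. The path below is an assumption; use whatever workspace root your nodes actually have:
[whitelist]
prefix = [ "/var/lib/jenkins/workspace" ]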

Appending to $PATH vs using aliases: Which is better?

In at least some cases, aliases and adding $PATH locations can be used interchangeably. For example, looking at the python tool couchapp, I need to either alias the executable (as helpfully described here) or make the executable available via $PATH.
These are the two lines that can achieve this:
alias couchapp="~/Library/Python/2.7/bin/couchapp"
OR
export PATH=$PATH:~/Library/Python/2.7/bin/
Is there a very definite 'better' option of these two? Why or why not?
An alias is a shell feature: any environment that invokes utilities directly, without involving a shell, will not see aliases.
Note: Even when calling shell commands from languages such as Python (using, e.g., os.system()), user-specific shell initialization files are typically not read, so user-specific aliases still won't be visible.
A directory added to the $PATH environment variable is respected by any process that tries to invoke an executable by mere filename, whether via a shell or not.
Similarly, this assumes that the calling process actually sees the $PATH additions of interest: additions made in user-specific shell initialization files are typically not visible unless the calling process was launched from an interactive shell.
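To make that concrete, here is a small Python sketch (couchapp is just the executable from the question; neither call will see a user-defined alias):
import subprocess

# Without shell=True no shell is involved at all, so an alias can never
# apply; the name is resolved directly against $PATH.
subprocess.run(["couchapp", "--version"])

# Even with shell=True, the shell is non-interactive and does not read
# user-specific init files, so user-defined aliases remain invisible.
subprocess.run("couchapp --version", shell=True)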
Lookup cost
If you know that a shell will be involved in invoking your utility, you can keep overhead down by defining aliases that invoke your executables by their full path.
Of course, you need to do this for each executable you want to be callable by name only.
By contrast, adding directories to the $PATH variable potentially increases the overhead of locating a given executable by mere filename, because all directories listed must be searched one by one until one containing an executable by the specified name is found (if any).
Precedence
If a shell is involved, aliases take precedence over $PATH lookups.
Of course, later alias definitions can override earlier ones.
If no shell is involved or no alias by a given name exists, $PATH lookups happen in the order in which the directories are listed in the variable.
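You can observe the search order from Python with shutil.which, which mimics the $PATH lookup (the directory below is the one from the question):
import os
import shutil

# Directories in $PATH are scanned left to right; the first match wins.
print(shutil.which("couchapp"))

# Prepending a directory changes which copy is found first.
os.environ["PATH"] = (os.path.expanduser("~/Library/Python/2.7/bin")
                      + os.pathsep + os.environ["PATH"])
print(shutil.which("couchapp"))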
As your example shows, $PATH lets you cover all of the executables in a location with a single line. For that reason I use the latter option. You can also chain many $PATH statements together, making it easy to add more executable locations from the command line.
If for some reason you do not want to make all of the executables available, an alias would be better.

How to upload a generic file into a Jenkins job?

I am trying to find a way to prompt the user to select and upload a generic file from a local machine to a Jenkins job prior to build. The input file that user is going to upload is not necessarily a text or a property file.
I am specifically trying to get the user to "select" their desired file, i.e. browse to it; the user should not have to pass the file's path.
Thanks
Use the File Parameter:
File parameter allows a build to accept a file, to be submitted by the user when scheduling a new build. The file will be placed inside the workspace at the known location after the check-out/update is done, so that your build scripts can use this file.
If you need to verify the file has a certain extension, you would have to do that with a script as part of your job, and fail the job if the extension/content-type does not match what you need.
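As a minimal sketch of such a check (the workspace location and the expected content type are assumptions; use whatever File location you configured for the parameter):
import os
import sys
import zipfile

# The name of the upload in the workspace is fixed by the parameter's
# "File location", so checking the content is more telling than checking
# the extension of a name you chose yourself.
uploaded = os.path.join(os.environ.get("WORKSPACE", "."), "input_file")

if not os.path.isfile(uploaded):
    sys.exit("expected upload missing: " + uploaded)
if not zipfile.is_zipfile(uploaded):
    sys.exit("uploaded file is not a zip archive")  # non-zero exit fails the build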
This is kind of annoying to handle when you don't know what the file name will be or need to change its name before it reaches its destination. You kind of need to perform a hack. This is how I do it:
Use the "File parameter" parameter to upload your file
Use the OS-specific script to rename the file from whatever you named your File Parameter to whatever you want it to be, e.g., if my File Parameter had the File location value of file_name instead of an actual relative file-path, I'd then do something like this for say, Windows inside a Build-Step for "Execute Windows Batch Command":
move .\file_name .\%file_name%
And then just use ArtifactDeployer to copy everything there to your desired location.
PS: this won't remove digital signatures, so the move operation should be considered mostly safe.
The Jenkins File Parameter will not work for Jenkins pipelines. It's ridiculous that they don't disable that kind of build parameter for pipelines, and even more ridiculous that they don't, at the very least, identify this SEVERE limitation in the help documentation for that parameter.
It would have saved me a couple of hours trying to figure out why it would not work in my pipeline.
Refer to this feature request for more details: https://issues.jenkins-ci.org/browse/JENKINS-27413

aliasing jenkins artifact URLs

Jenkins artifact URLs allow abstracting the "last successful build", so that instead of
http://myjenkins.local/job/MyJob/38/artifact/build/MyJob-v1.0.1.zip
we can say
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/MyJob-v1.0.1.zip
Is it possible to abstract this further? My artifacts have their version number in their filenames, which can change from build to build. Ideally I'd like some kind of "alias" URL that looks like this:
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/MyJob-latest.zip
MyJob-latest.zip would then resolve to MyJob-v1.0.1.zip.
If Jenkins itself can't do this, perhaps there's a plugin?
Never seen any such plugin, but Jenkins already has similar functionality built in.
You can use /*zip*/filename.zip in your artifact path, where filename is anything you choose. It will take all matching artifacts and download them in a zip file (you may end up with a zip inside a zip, if your artifact is already a zip file).
In your case, it will be:
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/*zip*/MyJob-latest.zip
This will get you the contents of /artifact/build/ returned as a zipped archive named MyJob-latest.zip. Note that if you have more than just that zip file in that directory, the other files will be returned too.
You can use wildcards in the path. A single * for a regular wildcard, a double ** for skipping any number of preceding directories.
For example, to get any file that starts with MyJob, ends with .zip, and to look for it in any artifact directory, you could use:
/lastSuccessfulBuild/artifact/**/MyJob*.zip/*zip*/MyJob-latest.zip
Edit:
You cannot do something like this without some form of container (a zip in this case). With the container, you are telling the system:
Get every possible [undetermined count] wildcard match and place the matches into this container, then give me the container. This is logical and possible, as there is only ever one container, whether it is empty or not.
But you cannot tell the system:
Give me a link to a specific single file, but I don't know which one or how many there are. The system can't guarantee that your wildcards will match one, more than one, or none. This is simply impossible from a logic perspective.
If you need it for some script automation, you can unzip the first-level zip and still be left with your desired zipped artifact.
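For example, here is a small Python sketch of that unwrapping step (the URL is the example from above; authentication is omitted, and the inner file name is whatever the build produced):
import io
import urllib.request
import zipfile

# Download the wrapper zip produced by the /*zip*/ trick, then extract
# the real, versioned artifact from inside it.
url = ("http://myjenkins.local/job/MyJob/lastSuccessfulBuild"
       "/artifact/build/*zip*/MyJob-latest.zip")
with urllib.request.urlopen(url) as resp:
    wrapper = zipfile.ZipFile(io.BytesIO(resp.read()))
for name in wrapper.namelist():
    if name.endswith(".zip"):
        wrapper.extract(name)  # e.g. MyJob-v1.0.1.zip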
If you need to provide this link to someone else, you need an alternative solution.
Alternative 1:
After your build is complete, execute a post-build step that takes your artifact and renames it to MyJob-latest.zip, but you lose the versioning in the filename. You can also choose to copy instead of rename, but then you use double the storage for these artifacts.
Alternative 2 (recommended):
As a post-build action, upload the artifact to a central repository. It can be Artifactory, or even plain SVN. When you upload it, it will be renamed MyJob-latest.zip and the previous one will be overwritten. This way you have a static link that always serves the latest artifact from the last successful build.
There is actually a plugin to assign aliases to builds you've run, and I have found it pretty handy: the Build Alias Setter Plugin.
You can use it for instance to assign an alias in the form of your own version number for a build, instead (or rather in addition) to the internal Jenkins-assigned build number.
I found that it is usually most practical to use it in conjunction with the EnvInject plugin (or your favorite variant): you would export an environment variable (e.g. MY_VAR=xyz) whose value is the target version or moniker, and then use the form ${ENV,var="MY_VAR"} in the "Token Macro alias" field that the plugin provides in your job config.
You can also use it to assign aliases in the form of "lastSuccessful" if you have such a need, which allows you to distinguish between different types of successful (or other state) builds.
Wait, there's more! You can also use the /*zip*/ trick in conjunction with the alias setter.
