I am developing a solution that simplifies hands-on debugging of failed Jenkins builds. It involves SSH-ing to the right Jenkins node and landing directly in the WORKSPACE, so you can interactively try different changes that could solve your problem.
While I solved the problem of starting an SSH session in the right directory, one bit is missing: the shell lacks the original environment variables defined by Jenkins, and these are critical for running any commands afterwards. So now the first command of the build is set > .envrc, which saves them all into this shell file.
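For reference, that capture step is a one-liner in the first build step (a sketch; set also dumps shell functions, and you may prefer export -p if the variables must be marked for export):

# First build step: snapshot the build environment into the workspace.
set > "$WORKSPACE/.envrc"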
My example refers to the direnv tool, which is able to auto-load .envrc files. Due to security concerns this tool does not auto-load these files by default, and instead prints the message: direnv: error .envrc is blocked. Run direnv allow to approve its content.
So my current solution is to manually run direnv allow after ending up in the right folder.
How can I automate this, so that I don't have to type it? A prompt would be acceptable, since it would involve pressing only one key instead of typing ~12 characters.
Please note that I am not forced to use direnv itself; I am open to other solutions.
As of v2.15.0, you can now use direnv's whitelist configuration to achieve what you described:
Specifying whitelist directives marks specific directory hierarchies
or specific directories as "trusted" -- direnv will evaluate any
matching .envrc files regardless of whether they have been
specifically allowed. This feature should be used with great care, as
anyone with the ability to write files to that directory (including
collaborators on VCS repositories) will be able to execute arbitrary
code on your computer.
For example, say that the directory hierarchy that contains the .envrcs you want to be evaluated without having to run direnv allow is located under /home/foo/bar.
Create the file /home/foo/.config/direnv/config.toml so that it contains the following:
[whitelist]
prefix = [ "/home/foo/bar" ]
Alternatively, if there is a fixed list of specific paths you want to whitelist, you can use exact rather than prefix:
[whitelist]
exact = [ "/home/foo/projectA", "/home/foo/projectB" ]
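For the Jenkins use case above, you could provision that file once per node, e.g. (the workspace root /var/lib/jenkins/workspace is an assumption; adjust it to your node layout):

# One-time setup on each node: trust every .envrc under the Jenkins workspaces.
mkdir -p ~/.config/direnv
cat >> ~/.config/direnv/config.toml <<'EOF'
[whitelist]
prefix = [ "/var/lib/jenkins/workspace" ]
EOF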
I've accumulated a lot of entries in my user and system PATH variables. I'm sure some of them don't even exist anymore, so I'm going to check them one by one. But is there an automatic way to do it?
There is no native Windows function to perform such a purge.
You would need to make a script (a rough sketch follows) which would:
split the %PATH% as described in "How can I use a .bat file to remove specific tokens from the PATH environment variable?"
build a new string from the folders that still exist
persist it with setx PATH "<new string>"
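A rough batch sketch of those steps (assumptions: no exotic characters such as ! in your PATH entries; also note setx truncates values over 1024 characters, so verify the echoed result before persisting):

@echo off
setlocal EnableDelayedExpansion
set "NEWPATH="
rem Split %PATH% on semicolons and keep only folders that still exist.
for %%D in ("%PATH:;=";"%") do (
    if exist "%%~D\" (
        if defined NEWPATH (set "NEWPATH=!NEWPATH!;%%~D") else (set "NEWPATH=%%~D")
    )
)
echo Cleaned PATH: !NEWPATH!
rem Persist for the current user once you are happy with the output:
rem setx PATH "!NEWPATH!"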
Since you cannot delete a workspace or reference tree in AccuRev (only deactivate it), we want to create a local copy of a stream's contents without using either of those.
I could of course use something like accurev hist in combination with accurev cat, but that sounds like an awful workaround for such basic functionality.
So, I wonder, is there an easy command to do this?
I only want to use this in my Jenkins CI environment to check the sources (compile, run tests, etcetera). I never have to push any changes back to AccuRev, so the AccuRev gurus would probably recommend using a reference tree.
However, I want to create these dynamically and they will only be used once.
It does not seem like a good idea to clutter the AccuRev server with thousands of unused reference trees.
You can use the accurev pop command to do exactly what you want. Within Jenkins, this is the equivalent of selecting the "Neither" option (no workspace or reference tree) if you are using the AccuRev plug-in for Jenkins.
If you prefer to script this yourself, you can use accurev pop -R -v <stream-name> -L <some-directory-location> /./ where you substitute in your stream name and the directory location to which you want to write. The /./ in the command tells AccuRev to populate the depot root directory and -R is to recurse the entire contents below that. You can specify another directory below that level using its depot relative path.
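For example, inside a Jenkins shell step (the stream name is a placeholder):

# Populate the full contents of stream MyStream into the job's workspace.
accurev pop -R -v MyStream -L "$WORKSPACE" /./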
We have a solution stored in TFS that deploys to SharePoint. As part of the solution we have a config file that contains a path to a specific site. The problem is that this path changes depending on the user's dev machine, e.g.
<site>devmachine1/somesite</site>
<site>devmachine2/somesite</site>
This can obviously be updated to work locally after a checkout; however, when the file gets checked back in, it will be incorrect on the next user's machine when they do a Get. Is there a way that the file can be excluded, or a script be run to update the path when it is checked back in or out?
The best option would be to standardize all of the developer workstations.
I would do this by adding an identical entry to the hosts file on each machine that hard-codes the name of the SharePoint server, allowing the same config file to work on every dev machine.
Make it dynamic by having a pre-build instruction that adds the host entry; that way any developer can get and build.
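A rough sketch of such a pre-build step (the alias sharepoint-dev and the address are assumptions; writing to the hosts file requires an elevated prompt):

rem Map a fixed alias for the SharePoint dev site unless it is already present.
findstr /C:"sharepoint-dev" %SystemRoot%\System32\drivers\etc\hosts >nul || (
    echo 127.0.0.1 sharepoint-dev>>%SystemRoot%\System32\drivers\etc\hosts
)

The config file can then reference <site>sharepoint-dev/somesite</site> on every machine.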
You can use a custom check-in policy to update the file when it is checked in. See here.
Jenkins artifact URLs allow abstracting the "last successful build", so that instead of
http://myjenkins.local/job/MyJob/38/artifact/build/MyJob-v1.0.1.zip
we can say
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/MyJob-v1.0.1.zip
Is it possible to abstract this further? My artifacts have their version number in their filename, which can change from build to build. Ideally I'd like some kind of "alias" URL that looks like this:
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/MyJob-latest.zip
MyJob-latest.zip would then resolve to MyJob-v1.0.1.zip.
If Jenkins itself can't do this, perhaps there's a plugin?
Never seen any such plugin, but Jenkins already has similar functionality built-in.
You can use /*zip*/filename.zip in your artifact path, where filename is anything you choose. It will take all found artifacts, and download them in a zipfile (you may end up with a zip inside a zip, if your artifact is already a zip file)
In your case, it will be:
http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/*zip*/MyJob-latest.zip
This will get you the contents of /artifact/build/ returned as a zipped archive named MyJob-latest.zip. Note that if you have more than just that zip file in that directory, the other files will be returned too.
You can use wildcards in the path. A single * for a regular wildcard, a double ** for skipping any number of preceding directories.
For example, to get any file that starts with MyJob, ends with .zip, and to look for it in any artifact directory, you could use:
/lastSuccessfulBuild/artifact/**/MyJob*.zip/*zip*/MyJob-latest.zip
Edit:
You cannot do something like this without some form of a container (a zip in this case). With the container, you are telling the system:
Get any possible [undetermined count] wildcard match and place into this container, then give me the container. This is logical and possible, as there is only one single container, whether it is empty or not.
But you cannot tell the system:
Give me a link to one specific file, but I don't know which one or how many there are. The system can't guarantee that your wildcards will match exactly one file: it could be one, several, or none. This is simply impossible from a logic standpoint.
If you need it for some script automation, you can unzip the first level zip and be still left with your desired zipped artifact.
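For example (URL from above; quoting it keeps the shell from globbing the asterisks):

# Fetch the wrapper zip that Jenkins assembles on the fly...
curl -fsS -o wrapper.zip 'http://myjenkins.local/job/MyJob/lastSuccessfulBuild/artifact/build/*zip*/MyJob-latest.zip'
# ...then unwrap the outer container once; the versioned artifact
# (e.g. MyJob-v1.0.1.zip) is inside.
unzip -o wrapper.zip -d extracted/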
If you need to provide this link to someone else, you need an alternative solution.
Alternative 1:
After your build is complete, execute a post-build step that takes your artifact and renames it to MyJob-latest.zip; however, you lose the versioning in the filename. You can also choose to copy instead of rename, but then you use double the space for storing these artifacts.
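A minimal post-build shell sketch (paths are assumptions, and it presumes exactly one versioned zip per build):

# Publish a stable-name copy next to the versioned artifact.
cp build/MyJob-v*.zip build/MyJob-latest.zip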
Alternative 2 (recommended):
As a post-build action, upload the artifact to a central repository. It can be Artifactory, or even plain SVN. When you upload it, it will be renamed MyJob-latest.zip and the previous one will be overwritten. This way you have a static link that always serves the latest artifact from the last successful build.
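With Artifactory, for instance, this can be a single upload in a post-build shell step (the server URL, repository path, and credentials are placeholders):

# Overwrite the fixed-name artifact on every successful build.
curl -u "$REPO_USER:$REPO_PASS" -T build/MyJob-v1.0.1.zip "https://artifactory.example.com/artifactory/releases/MyJob/MyJob-latest.zip"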
There is actually a plugin to assign aliases to builds you've run, and I have found it pretty handy: the Build Alias Setter Plugin.
You can use it for instance to assign an alias in the form of your own version number for a build, instead (or rather in addition) to the internal Jenkins-assigned build number.
I found that it is usually most practical to use it in conjunction with the EnvInject plugin (or your favorite variant): you would export an env variable (e.g. MY_VAR=xyz) whose value is the target version or moniker, and then use the form ${ENV,var="MY_VAR"} in the "Token Macro alias" field that the plugin provides in your job config.
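For example, assuming EnvInject sets MY_VAR=v1.0.1, a "Token Macro alias" entry of

MyJob-${ENV,var="MY_VAR"}

would alias the build as MyJob-v1.0.1.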
You can also use it to assign aliases in the form of "lastSuccessful" if you have such a need, which allows you to distinguish between different types of successful (or other state) builds.
Wait, there's more! You can also use the /*zip*/ trick in conjunction with the alias setter.
Let's say I am developing two applications for two different clients, which use two different database configurations.
When using OpenShift and CakePHP, it is considered good practice not to store the real connection info in the configs, but to use environment variables instead.
That way the Git repo also stays clean of server-specific stuff.
That is all fine as long as I have ONE project, but as soon as there is another one, I need to override my local env vars according to the current project.
Is there any way to avoid this? Is it possible to set up env-vars on my local machine per directory or something like that?
I am running OS X with MAMP Pro.
This may not be the best solution, but it would work: create a different user on your local machine, and switch to that other user when you need the other set of environment variables.
I create a 'data' directory in my Git repo and set it to be ignored. That way, nothing in there is committed to the repo or sent to OpenShift. I place a config.ini file there with all the info that I don't want in my repo.
I then manually put that config.ini file in OpenShift's persistent data directory using WinSCP. You only have to do this when you change your config.ini.
When my app runs, it detects whether it is running locally or on OpenShift and loads the config.ini file from the correct directory.
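A minimal sketch of that detection in PHP (it relies on OpenShift's OPENSHIFT_DATA_DIR variable, which already ends in a slash; the local path is an assumption):

// Pick the config.ini location depending on where we are running.
$dataDir = getenv('OPENSHIFT_DATA_DIR');            // set only on OpenShift
$path = $dataDir ? $dataDir . 'config.ini'          // persistent data dir
                 : __DIR__ . '/data/config.ini';    // local, git-ignored copy
$config = parse_ini_file($path);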
I would be interested if somebody has a better idea.
HTH