Skipping parts of a Travis job if building from a fork - environment-variables

I've been writing some shell tasks that can't run unless there are secure environment variables present in a Travis CI PR build. There is an auth token that must be present to push some information up about the build, so for builds originating from forks, I'd like to simply skip these parts. They're not critical.
How can I tell if a build is originating from a fork?
From the documentation around "Environment Variables":
TRAVIS_SECURE_ENV_VARS: Whether or not secure environment vars are being used. This value is either "true" or "false".
This is a little ambiguous. Does it mean that secure environment variables are being used anywhere (as in, present in .travis.yml)? Or that they are being exported as environment variables in the current build? I'm not sure this is a good way to guarantee that I'm testing a pull request that originated from a fork, but I didn't see any other way to do it.
My first attempt had code that looked something like
[ ${TRAVIS_SECURE_ENV_VARS} = "false" ] && exit 0; # more shell code here...
but this appears to continue ahead and push up without an auth token, failing the task (and the build). Further complicating matters, should the command fail, the output may contain the auth token, so everything from stderr and stdout is redirected to /dev/null. Considering that builds don't start for several minutes, I'm stuck waiting in a long debug cycle.
My next attempt simply bypassed this built-in environment variable, in favor of instead trying to grab a secure environment variable directly.
[ ${ghToken} -n ] && exit 0;
This fails in the same way as above. I'm beginning to wonder if [ $COND ] && exit 0; really works the way I'm expecting it to in this context. It seems to work just fine when I run equivalent shell scripts locally (Mac OSX and bash).
Does Travis offer a built-in way to determine if the pull request being built originated from the original repository versus a fork?
Here is my current workaround.
screenshotsClone: {
  command: ['[ ${ghToken} ] &&',
    'git submodule add -f', screenshotPullTemplate, 'screenshots > /dev/null 2>&1;'].join(' '),
  options: {
    stdout: false,
    failOnError: false
  }
}
I'd rather not silently pass on errors should there be a legitimate problem with the shell task. At this point I might as well remove the environment variable check that precedes it.

Stop using grunt-shell for things in your .travis.yml. Put those shell tasks into their own files so that you can stop using /bin/sh and start using /bin/bash, which is what you've been testing against locally.
Something like this will fix your problems.
Don't forget to mark the shebang as #! /bin/bash, which is crucial for the kinds of checks that should be happening on Travis.
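Since the original snippet isn't reproduced here, the following is only a minimal sketch of what such a script could look like (the file name, and passing the submodule URL as the first argument in place of the grunt screenshotPullTemplate variable, are assumptions):
#!/bin/bash
# scripts/push-screenshots.sh -- hypothetical name; called from .travis.yml instead of grunt-shell.

# Skip silently (and successfully) when the secure variables are missing,
# which is the case for pull request builds coming from forks.
if [ "${TRAVIS_SECURE_ENV_VARS}" = "false" ] || [ -z "${ghToken}" ]; then
  echo "Secure env vars unavailable (fork build?); skipping screenshot push."
  exit 0
fi

# Redirect all output so the token can never leak into the build log.
git submodule add -f "$1" screenshots > /dev/null 2>&1
You would then call it from the script (or after_success) section of .travis.yml, passing the submodule URL as its argument.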

I think what you want is to check if one of your secure environment variables is null to detect that you are running a build from a fork and in that case stop the build script prematurely.
Hence I would suggest you use the -z comparison operator in Bash, which detects empty (null) strings, whereas the -n operator detects non-empty strings. A non-empty secure environment variable would mean that you are not running the build from a fork.
My suggestion is to change your line to:
[ -z "${ghToken}" ] && exit 0;
Hope this helps.

Related

Implement 'Entrypoint' like functionality in Cloud Native Buildpack

I have a multi-process web app. The processes are contributed by different buildpacks. The default process will start the web application. I have a use case in which a given shell script should be executed before the default process invocation.
I have tried the following approach:
Create a custom-buildpack
Create a script that needs to be executed and invoke the web process in it.
Create a new process based on the above shell script by specifying it in the launch.toml definition
Make the buildpack launchable
The entrypoint.sh
#!/usr/bin/env bash
# Some fancy stuff..
#Invoke the web process
/cnb/process/web
Create launch.toml from the build script of the custom buildpack. Make the entrypoint process the default one.
cat > "$layers_dir/launch.toml" << EOL
[[processes]]
type = "entrypoint"
command = "bash"
args = ["$scriptlayer/bin/entrypoint.sh"]
default = true
EOL
echo -e '[types]\nlaunch = true' > "$layers_dir/assembly-scripts.toml"
Truncated pack inspect-image output
Processes:
  TYPE                  SHELL  COMMAND      ARGS
  entrypoint (default)  bash   bash         /layers/gw_assembly-scripts/assembly-scripts/bin/entrypoint.sh
  task                  bash   catalina.sh  run
  tomcat                bash   catalina.sh  run
  web                   bash   catalina.sh  run
Is there any better CNB native approach to achieve this use case?
You have a couple of options here:
The simplest option would be to add a .profile script to the root of your application. It's a bash script, so anything you can write in bash can be done there; however, it's primarily intended for initializing your app and setting additional env variables.
This file runs prior to the command in your process type. I looked for documentation on this behavior, but only found it briefly mentioned in the buildpacks spec.
As an example, if I put .profile in the root of my application and inside that file I write echo 'Hello World!', I'll see Hello World! printed before any of my process types execute.
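In file form, that example (plus a hedged illustration of setting an extra variable) would just be:
# .profile at the root of the application source; the launcher sources this
# before the selected process type starts.
echo 'Hello World!'
export EXTRA_SETTING="some value"   # hypothetical variable, only to show env setup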
If you want to create a buildpack, you can achieve something similar to the .profile script by having your buildpack include an exec.d binary.
This is a binary that's part of your launch image and gets run prior to any of your process types. It allows you to take actions to initialize an application and set additional environment variables dynamically before your application starts.
This mechanism is often used by buildpack authors to provide dynamic behavior at runtime based on changes to environment variables or Kubernetes service bindings. For example, turning on/off features like APM tools, debugging, and metrics.
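As a rough sketch (the layer path and variable name are made up; the file-descriptor-3 convention for returning environment variables comes from the spec), an exec.d executable could be as small as:
#!/usr/bin/env bash
# <layer>/exec.d/configure -- run by the launcher before any process type starts.
echo 'configuring app at launch time...' >&2
# Environment variables are handed back to the launcher as TOML written to fd 3.
echo 'MY_FEATURE_FLAG = "on"' >&3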
A few other miscellaneous notes.
Neither of the options above allows you to change the actual process type. The process type that will be executed is selected prior to these options (.profile and exec.d) running and you cannot influence that from within. You can only use them to run things prior to the process type running.
The buildpack spec does not allow for a buildpack to modify the process types for another buildpack. So you cannot create a buildpack that wraps or modifies process types set by another buildpack. That said, a buildpack can override the process types set by another buildpack. Buildpacks that are later in the order group will override earlier buildpacks.
From the spec: A combined processes list derived from all launch.toml files such that process types from later buildpacks override identical process types from earlier buildpacks.
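For example (sketching in the same heredoc style as the question, and reusing its $layers_dir/$scriptlayer variables), a buildpack that runs later in the order could simply redefine the web process instead of adding a new entrypoint type:
cat > "$layers_dir/launch.toml" << EOL
[[processes]]
type = "web"
command = "bash"
args = ["$scriptlayer/bin/entrypoint.sh"]
default = true
EOL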
With buildpacks, the entrypoint is always the launcher. The launcher is a process that runs and implements the application side of the buildpack specification. It runs .profile and exec.d binaries, sets up buildpack-provided environment variables, and eventually launches the specified process type.
If you override the entrypoint for a container then the launcher won't run and none of the things it is supposed to do will happen. Sometimes this is desired, like if you're troubleshooting, but usually you want the launcher to be the entrypoint.
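If you do want to bypass it temporarily, for example while troubleshooting, you could override the entrypoint by hand (the image name is a placeholder, and this assumes bash exists in the run image):
# Start a shell instead of the launcher: no .profile, no exec.d, no buildpack env vars.
docker run -it --rm --entrypoint /bin/bash my-app-image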

Running terragrunt run-all destroy in Jenkins needs user input

I am currently using terragrunt to manage my terraform code, and am running into an error when trying to destroy my infrastructure. I want to be able to spin up development environments (or parts of environments) and delete them easily through Jenkins.
A certain part of my infrastructure is structured so that I need to use the terragrunt run-all command, which results in the following user message: WARNING: Are you sure you want to run `terragrunt destroy` in each folder of the stack described above? There is no undo! (y/n). Jenkins immediately fails after this output as it expects a y/n input.
For apply I have managed to go around this by saving a plan and then applying it, however for the destroy command I can't find another way. terraform commands have an -auto-approve option, but this seems to do nothing to the terragrunt run-all command, despite this being in the documentation:
Using run-all with apply or destroy silently adds the -auto-approve flag to the command line arguments passed to Terraform due to issues with shared stdin making individual approvals impossible.
Does anyone have any experience of this or any advice? Am I misunderstanding the documentation?
Just in case anyone else is looking for an answer to this, the flag --terragrunt-non-interactive can be used.
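For example, the Jenkins shell step could run something along these lines (sketch only):
# Destroys every module in the stack without waiting for a y/n answer.
terragrunt run-all destroy --terragrunt-non-interactive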

Conda: what happens when you activate an environment?

How does running source activate <env-name> update the $PATH variable? I've been looking in the CONDA-INSTALLATION/bin/activate script and do not understand how conda updates my $PATH variable to include the bin directory for the recently activated environment. Nowhere can I find the code that conda uses to prepend my $PATH variable.
Disclaimer: I am not a conda developer, and I'm not a Bash expert. The following explanation is based on me tracing through the code, and I hope I got it all right. Also, all of the links below are permalinks to the master commit at the time of writing this answer (7cb5f66). Behavior/lines may change in future commits. Beware: Deep rabbit hole ahead!
Note that this explanation is for the command source activate env-name, but in conda>=4.4, the recommended way to activate an environment is conda activate env-name. I think if one uses conda activate env-name, you should pick up the explanation around the part where we get into the cli.main function.
For conda >=4.4,<4.5, looking at CONDA_INST_DIR/bin/activate, we find on the second to last and last lines (GitHub link):
. "$_CONDA_ROOT/etc/profile.d/conda.sh" || return $?
_conda_activate "$@"
The first line sources the script conda.sh in the $_CONDA_ROOT/etc/profile.d directory, and that script defines the _conda_activate bash function, to which we pass the arguments "$@", which is basically all of the arguments that we passed to the activate script.
Taking the next step down the rabbit hole, we look at $_CONDA_ROOT/etc/profile.d/conda.sh and find (GitHub link):
_conda_activate() {
    # Some code removed...
    local ask_conda
    ask_conda="$(PS1="$PS1" $_CONDA_EXE shell.posix activate "$@")" || return $?
    eval "$ask_conda"
    _conda_hashr
}
The key is the line ask_conda=..., and particularly $_CONDA_EXE shell.posix activate "$@". Here, we are running the conda executable with the arguments shell.posix, activate, and then the rest of the arguments that got passed to this function (i.e., the environment name that we want to activate).
Another step into the rabbit hole... From here, the conda executable calls the cli.main function and since the first argument starts with shell., it imports the main function from conda.activate. This function creates an instance of the Activator class (defined in the same file) and runs the execute method.
The execute method processes the arguments and stores the passed environment name into an instance variable, then decides that the activate command has been passed, so it runs the activate method.
Another step into the rabbit hole... The activate method calls the build_activate method, which calls another function to process the environment name to find the environment prefix (i.e., which folder the environment is in). The build_activate method then adds the prefix to the PATH via the _add_prefix_to_path method. Finally, the build_activate method returns a dictionary of commands that need to be run to "activate" the environment.
And another step deeper... The dictionary returned from the build_activate method gets processed into shell commands by the _yield_commands method, which are passed into the _finalize method. The activate method returns the value from running the _finalize method which returns the name of a temp file. The temp file has the commands required to set all of the appropriate environment variables.
Now, stepping back out, in the activate.main function, the return value of the execute method (i.e., the name of the temp file) is printed to stdout. This temp file name gets stored in the Bash variable ask_conda back in the _conda_activate Bash function, and finally, the temp file is executed by the eval Bash function.
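If you just want to see what ends up being eval'd without tracing the source, you can run the intermediate command yourself (in more recent conda versions the shell code is printed directly to stdout rather than going through a temp file; the environment name here is a placeholder):
# Prints the shell snippet that `conda activate my-env` would eval.
conda shell.posix activate my-env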
Phew! I hope I got everything right. As I said, I'm not a conda developer, and far from a Bash expert, so please excuse any explanation shortcuts I took that aren't 100% correct. Just leave a comment, I'll be happy to fix it!
I should also note that the recommended method to activate environments in conda >=4.4 is conda activate env-name, which is one of the reasons this is so convoluted - the activation is mostly handled in Python now, whereas (I think) previously it was more-or-less handled directly in Bash/CMD.

Rails - Run system command in production

I'm trying to run a C++ executable in my Rails app that is placed in a folder called "algo", like this:
result = `cd algo && ./my_main #{str} -1 -1 #{id}`
In development it works flawlessly, but in production in the cloud it does not run.
Consider that:
1) In the cloud, which is a virtual machine, I run the same executable without problems from the console terminal, navigating through the Rails application folders; it only fails when I try to run it from the Rails application
2)
Rails.logger.info result
Returns nothing
3)
Rails.logger.info `pwd`
Does return the current folder of the project
4)
Rails.logger.info $?
Only returns: pid 35314 exit 127
5)
Rails.logger.info File.exist?("algo/my_main")
Returns true
6)
In the config/environments/production.rb the log level is config.log_level = :info
7)
In log/production.log no error appears, like you would see in development in the terminal
8)
I also tried other commands like system(), exec(), and %x(), with the same result
9)
Finally, I ran sudo chmod -R 777 in the virtual machine on the parent folder of the Rails app folder; I think that is implicit in point 1, but I mention it for clarity
You should always use absolute paths for any code that will be executed by a script. The PATH variable may be different for the user executing the script than it is for the user that you use, and it's much better to be 100% precise about the file path you want than to rely on PATH. (Your exit status of 127 is the shell's "command not found" code, which is consistent with a path or PATH problem.)
Along the same lines, make sure the user that runs the Rails server has execute permissions on the script. If in doubt, log in as that user and attempt to execute the script.
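For example, to reproduce what the Rails process actually sees, you could run the binary as that user with absolute paths (the user name and paths below are only placeholders):
# Run as the hypothetical "deploy" user that owns the Rails process, using absolute paths.
sudo -u deploy /bin/bash -lc 'ls -l /var/www/myapp/algo/my_main && /var/www/myapp/algo/my_main test -1 -1 42'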
You also need to escape both str and id for security reasons. Even if these variables are not currently derived in any way from submitted parameters, there's always a possibility that whatever function contains this code might be executed with user-submitted variables at some point. Basically, it's better to be safe than sorry, because this is the kind of security hole that could allow anyone on the Internet to execute arbitrary code on your server.

Reload environment variables PATH from chef recipes client

Is it possible to reload $PATH from a Chef recipe?
I'm interested in the response about process signals given in the following thread:
How to have Chef reload global PATH
I don't understand very well the example that the user omribahumi gives.
I would like a clearer example with chef-client / a recipe to understand it;
from what he explains, it seems it is possible with that workaround.
Thanks.
Well I see two reasons for this request:
1. Add something to the path for immediate execution => easy, just update the ENV['PATH'] variable within the Chef run.
2. Extend the PATH system-wide to include something just installed.
For the second case, you may update the /etc/environment file (for Ubuntu) or add a file to /etc/profile.d (a better idea, to keep control over it); see the sketch below.
But obviously the new PATH variable won't be available to already-running processes (including your current shell); it will only apply to processes launched after the file update.
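For example, a tiny file dropped into /etc/profile.d could look like this (the added directory is only an illustration):
# /etc/profile.d/my_app_path.sh -- picked up by login shells started after this file exists.
export PATH="/opt/my_app/bin:$PATH"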
To explain the link you gave a little more, what is done there is:
create a file with export commands to set env variables
echo 'export MYVAR="my value"' > ~/my_environment
create a bash function loading env vars from a file
function reload_environment { source ~/my_environment; }
set a trap in bash to do something on a signal, here running the function when bash receives SIGHUP
trap reload_environment SIGHUP
Launch the function for a first sourcing of the env file; there are two ways:
easy one: launch the function
reload_environment
complex one: Get the pid of your actual shell and send it a SIGHUP signal
kill -HUP `echo $$`
All of this applies only to the current shell unless you set it up in your .bashrc.
Not exactly what you were asking for indeed, but I hope you'll understand there's no way to update the environment of an already running process.
The best you can do is update the PATH with whatever method you wish (something in /etc/profile.d for example) and execute a wall (if Chef runs as root) to tell users to reload their env:
echo 'reload your shell env by executing: source /etc/profile' | wall
Once again, this could work for humans, but not for other processes already running; those will have to be restarted.
