Travis: Can you construct an env var using other env vars?

What I'm looking to do is set up my .travis.yml to use appropriately named environment vars based on which branch is being built.
I'm thinking along the lines of: if I have, say, $DEV_ARTIFACTS_KEY / $TEST_ARTIFACTS_KEY etc. stored in Travis,
and I push to the DEV branch, so $TRAVIS_BRANCH = DEV,
I can then do something like: ARTIFACTS_KEY=${$TRAVIS_BRANCH}_ARTIFACTS_KEY
and it becomes: ARTIFACTS_KEY=$DEV_ARTIFACTS_KEY
Obviously the above syntax with {} doesn't work, or I wouldn't be here! Wondering whether in theory this method is possible, and if so, how!?
EDIT: For further detail on what I'm trying to do
I want to set three variables in the above manner:
ARTIFACTS_KEY, ARTIFACTS_SECRET & ARTIFACTS_BUCKET created on the fly from
{BRANCH}_ARTIFACTS_KEY, {BRANCH}_ARTIFACTS_SECRET, {BRANCH}_ARTIFACTS_BUCKET
I have it working without the branch variables like:
env:
- ARTIFACTS_KEY=$DEV_ARTIFACTS_KEY ARTIFACTS_SECRET=$DEV_ARTIFACTS_SECRET ARTIFACTS_BUCKET=$DEV_ARTIFACTS_BUCKET
where $DEV_ARTIFACTS_KEY etc. are defined in Travis. However, I've so far failed at replacing the DEV portion with the branch name on the fly.

This works in bash and therefore should also work in Travis:
# setup dummy values
TRAVIS_BRANCH=DEV
DEV_ARTIFACTS_KEY=dev-artifacts-key-value
DEV_ARTIFACTS_SECRET=dev-artifacts-secret-value
DEV_ARTIFACTS_BUCKET=dev-artifacts-bucket-value
# actual lines you want
eval ARTIFACTS_KEY=\$${TRAVIS_BRANCH}_ARTIFACTS_KEY
eval ARTIFACTS_SECRET=\$${TRAVIS_BRANCH}_ARTIFACTS_SECRET
eval ARTIFACTS_BUCKET=\$${TRAVIS_BRANCH}_ARTIFACTS_BUCKET
# test results
echo "key=$ARTIFACTS_KEY"
echo "secret=$ARTIFACTS_SECRET"
echo "bucket=$ARTIFACTS_BUCKET"
Whenever you have the name of a variable inside another variable, you need eval to interpret it. When bash sees this line:
eval ARTIFACTS_KEY=\$${TRAVIS_BRANCH}_ARTIFACTS_KEY
It first expands the variable (and leaves the escaped $ alone):
eval ARTIFACTS_KEY=\$DEV_ARTIFACTS_KEY
Then it executes eval on the string ARTIFACTS_KEY=$DEV_ARTIFACTS_KEY which in turn expands $DEV_ARTIFACTS_KEY and assigns the value to ARTIFACTS_KEY.
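As an aside, bash can do the same thing without eval using indirect parameter expansion (${!name}); a minimal sketch, assuming the same dummy values as above:
# build the variable's name, then dereference it with ${!...}
key_var="${TRAVIS_BRANCH}_ARTIFACTS_KEY"
ARTIFACTS_KEY="${!key_var}"
echo "key=$ARTIFACTS_KEY"   # -> key=dev-artifacts-key-value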

Related

Get all environment variables of a Docker container including security variables

Is there a way to get all environment variables that a Docker image accepts? Including authentications and all possible ones to make the best out of that image?
For example, I've run a redis:7.0.8 container and I want to use every possible feature this image offers.
First I used docker inspect and saw this:
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GOSU_VERSION=1.16",
"REDIS_VERSION=7.0.8",
"REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-7.0.8.tar.gz",
"REDIS_DOWNLOAD_SHA=06a339e491306783dcf55b97f15a5dbcbdc01ccbde6dc23027c475cab735e914"
],
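For reference, an equivalent one-liner that prints just this Env array (assuming the container is named my-container):
docker inspect --format '{{json .Config.Env}}' my-container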
I also tried docker exec -it my-container env which just showed me the same thing. I know there are more variables, for example this doesn't include the following:
REDIS_PASSWORD
REDIS_ACLS
REDIS_TLS_CERT_FILE
Absent documentation, this is pretty much impossible.
Let's start by repeating @jonrsharpe's comment:
They accept any env var at all, but they won't respond to all of them.
Consider this Python code, for example:
import os

def get_environ(d, name):
    return d.get(name, 'absent')

foo = os.environ.get('FOO', 'default_foo')
star_foo = get_environ(os.environ, foo)
print(star_foo)
This fragment looks up an environment variable $FOO. You could probably figure that out, if you knew the main process was in Python and recognized os.environ. But then it passes that value and the standard environment to a helper function, which looks up that environment variable by name. You'd need detailed static analysis to understand this is actually also an environment-variable lookup.
$ ./test.py
absent
$ default_foo=bar ./test.py
bar
$ FOO=BAR BAR=quux ./test.py
quux
$ I=3 ./test.py
absent
(A fair bit of the code I work with accesses environment variables rather haphazardly; it's not just "find the main function" but "find every ENV reference in every file in every library". Some frameworks like Spring Boot make it possible to set hundreds of configuration options via environment variables, and even if it were possible to enumerate every possible setting here, the output would be prohibitively long.)
"What environment variables are there" isn't standard container metadata. You'd have to identify the language the main container process runs, and do this sort of analysis on it, including compiled languages. That doesn't seem like a solvable problem.

terraform doesn't load environment variables set in fish

In the root folder of my project next to main.tf, I have a script called load_env.fish containing these two lines:
set -U AWS_SHARED_CREDENTIALS_FILE "~/path/to/file"
set -U AWS_PROFILE "my_profile"
I run that, then I run the command terraform import foo bar. It gives me Access Denied.
However, if I use bash instead of fish, and I set up the same environment variables, then terraform import foo bar works.
And I can even get it to work in fish if I do this:
from bash, set up environment variables
start the fish shell from bash
now in the fish shell, run terraform import foo bar
So,
Why does it work if I use bash and not fish? And why does it work in fish if the fish shell is opened from a bash shell that has the correct environment variables set?
How can I use terraform in the fish shell without having to open nested bash and fish shells?
Universal variables are shared between all fish sessions, but they are not automatically exported to subprocesses.
I simply changed all instances of set -U ... to set -Ux ... and everything worked.
EDIT: After seeing KurtisRader's comment concerning the downside of set -Ux and reading a bit more, I realize now that fish has the source command just like bash. So, inside the script I can just use
set -x foo bar
Then I can
$ source load_env.fish
instead of just
$ ./load_env.fish
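Putting it together, load_env.fish would then contain session-local exports, using the placeholder values from the question:
# load_env.fish: -x exports to child processes without making
# the variables universal, so they only affect the sourcing session
set -x AWS_SHARED_CREDENTIALS_FILE "~/path/to/file"
set -x AWS_PROFILE "my_profile"
and terraform picks the variables up after source load_env.fish.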

Dockerfile single line `ENV` composing variables not working

I want to compose two environment variables: first define a "root", then on the same line use it to build a composed one (for example, a filename plus its extension).
With this Dockerfile,
FROM centos:7
ENV ROOT_VAR=stringy ROOT_VAR_TGZ=${ROOT_VAR}.tar.gz
RUN echo ${ROOT_VAR} $ ${ROOT_VAR_TGZ}
The output for echo is
stringy $ .tar.gz
But when each variable is split into its own ENV instruction, the composition works correctly.
Is this the expected behaviour?
The behaviour is clearly explained in the docker reference document:
Environment variable substitution will use the same value for each variable throughout the entire instruction. In other words, in this example:
ENV abc=hello
ENV abc=bye def=$abc
ENV ghi=$abc
will result in def having a value of hello, not bye. However, ghi will have a value of bye because it is not part of the same instruction that set abc to bye.
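Applied to the question's Dockerfile, the fix is simply to split the definition across two ENV instructions:
FROM centos:7
# the second ENV instruction sees the value set by the first one
ENV ROOT_VAR=stringy
ENV ROOT_VAR_TGZ=${ROOT_VAR}.tar.gz
RUN echo ${ROOT_VAR} $ ${ROOT_VAR_TGZ}
which now prints stringy $ stringy.tar.gz.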

activating conda env vs calling python interpreter from conda env

What exactly is the difference between these two operations?
source activate python3_env && python my_script.py
and
~/anaconda3/envs/python3_env/bin/python my_script.py ?
It appears that activating the environment adds some directories to $PATH, but the second method still seems to access all the modules installed in python3_env. Is there anything else going on under the hood?
You are correct, activating the environment adds some directories to the PATH environment variable. In particular, this will allow any binaries or scripts installed in the environment to be run first, instead of the ones in the base environment. For instance, if you have installed IPython into your environment, activating the environment allows you to write
ipython
to start IPython in the environment, rather than
/path/to/env/bin/ipython
In addition, environments may have scripts that add or edit other environment variables that are executed when the environment is activated (see the conda docs). These scripts can make arbitrary changes to the shell environment, including even changing the PYTHONPATH to change where packages are loaded from.
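For example, per the conda docs, an environment can carry activation hooks under etc/conda/activate.d; a minimal sketch, with a made-up variable name:
# $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
# sourced automatically on activation, but never when you invoke
# the interpreter by its full path
export MY_EXTRA_VAR="some value"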
Finally, I wrote a very detailed answer about what exactly happens in the code over at: Conda: what happens when you activate an environment? That may or may not still be up to date, though. The relevant part of the answer is:
...the build_activate method adds the prefix to the PATH via the _add_prefix_to_path method. Finally, the build_activate method returns a dictionary of commands that need to be run to "activate" the environment.
And another step deeper... The dictionary returned from the build_activate method gets processed into shell commands by the _yield_commands method, which are passed into the _finalize method. The activate method returns the value from running the _finalize method which returns the name of a temp file. The temp file has the commands required to set all of the appropriate environment variables.
Now, stepping back out, in the activate.main function, the return value of the execute method (i.e., the name of the temp file) is printed to stdout. This temp file name gets stored in the Bash variable ask_conda back in the _conda_activate Bash function, and finally, the temp file is executed by the eval Bash function.
So you can see, depending on the environment, running conda activate python3_env && python my_script.py and ~/anaconda3/envs/python3_env/bin/python my_script.py may give very different results.

Conda: what happens when you activate an environment?

How does running source activate <env-name> update the $PATH variable? I've been looking in the CONDA-INSTALLATION/bin/activate script and do not understand how conda updates my $PATH variable to include the bin directory of the recently activated environment. Nowhere can I find the code that conda uses to prepend my $PATH variable.
Disclaimer: I am not a conda developer, and I'm not a Bash expert. The following explanation is based on me tracing through the code, and I hope I got it all right. Also, all of the links below are permalinks to the master commit at the time of writing this answer (7cb5f66). Behavior/lines may change in future commits. Beware: Deep rabbit hole ahead!
Note that this explanation is for the command source activate env-name, but in conda>=4.4, the recommended way to activate an environment is conda activate env-name. I think if one uses conda activate env-name, you should pick up the explanation around the part where we get into the cli.main function.
For conda >=4.4,<4.5, looking at CONDA_INST_DIR/bin/activate, we find on the second to last and last lines (GitHub link):
. "$_CONDA_ROOT/etc/profile.d/conda.sh" || return $?
_conda_activate "$@"
The first line sources the script conda.sh in the $_CONDA_ROOT/etc/profile.d directory, and that script defines the _conda_activate bash function, to which we pass "$@", i.e., all of the arguments that were passed to the activate script.
Taking the next step down the rabbit hole, we look at $_CONDA_ROOT/etc/profile.d/conda.sh and find (GitHub link):
_conda_activate() {
    # Some code removed...
    local ask_conda
    ask_conda="$(PS1="$PS1" $_CONDA_EXE shell.posix activate "$@")" || return $?
    eval "$ask_conda"
    _conda_hashr
}
The key is that line ask_conda=..., and particularly $_CONDA_EXE shell.posix activate "$@". Here, we are running the conda executable with the arguments shell.posix, activate, and then the rest of the arguments that got passed to this function (i.e., the name of the environment that we want to activate).
Another step into the rabbit hole... From here, the conda executable calls the cli.main function and since the first argument starts with shell., it imports the main function from conda.activate. This function creates an instance of the Activator class (defined in the same file) and runs the execute method.
The execute method processes the arguments and stores the passed environment name into an instance variable, then decides that the activate command has been passed, so it runs the activate method.
Another step into the rabbit hole... The activate method calls the build_activate method, which calls another function to process the environment name to find the environment prefix (i.e., which folder the environment is in). Finally, the build_activate method adds the prefix to the PATH via the _add_prefix_to_path method. Finally, the build_activate method returns a dictionary of commands that need to be run to "activate" the environment.
And another step deeper... The dictionary returned from the build_activate method gets processed into shell commands by the _yield_commands method, which are passed into the _finalize method. The activate method returns the value from running the _finalize method which returns the name of a temp file. The temp file has the commands required to set all of the appropriate environment variables.
Now, stepping back out, in the activate.main function, the return value of the execute method (i.e., the name of the temp file) is printed to stdout. This temp file name gets stored in the Bash variable ask_conda back in the _conda_activate Bash function, and finally, the temp file is executed by the eval Bash function.
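To make that last step concrete, the eval'd temp file contains plain shell commands along these lines (illustrative only; the exact contents vary by conda version and platform):
# hypothetical temp file emitted by `conda shell.posix activate env-name`
export CONDA_PREFIX='/home/user/anaconda3/envs/env-name'
export CONDA_DEFAULT_ENV='env-name'
export PATH='/home/user/anaconda3/envs/env-name/bin:/usr/local/bin:/usr/bin:/bin'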
Phew! I hope I got everything right. As I said, I'm not a conda developer, and far from a Bash expert, so please excuse any explanation shortcuts I took that aren't 100% correct. Just leave a comment, I'll be happy to fix it!
I should also note that the recommended method to activate environments in conda >=4.4 is conda activate env-name, which is one of the reasons this is so convoluted - the activation is mostly handled in Python now, whereas (I think) previously it was more-or-less handled directly in Bash/CMD.
