Nomad runtime environment and variable interpolation fail to load environment value - docker

I was creating a Nomad job for an Airflow worker node using the Docker driver, and I wanted to read the IP address or hostname of the current Nomad client from the environment.
I first tried the task's template stanza: HOST_NAME="{{ env "attr.unique.hostname" }}" works fine and I can print the correct host name in the log.
However, when I try to fill in a config parameter such as hostname (https://www.nomadproject.io/docs/drivers/docker.html#hostname) using variable interpolation (https://www.nomadproject.io/docs/runtime/interpolation), e.g. hostname = ${attr.unique.hostname}, I get this complaint during the Terraform deployment:
Invalid value for "vars" parameter: vars map does not contain key "attr",
I also tried the runtime environment variables (https://www.nomadproject.io/docs/runtime/environment) to fill in a config parameter, e.g. ipv4_address = "$NOMAD_IP_<label>:$NOMAD_HOST_PORT_<label>" (https://www.nomadproject.io/docs/drivers/docker.html#ipv4_address). That failed to pass in the right value either.
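For what it's worth, here is a minimal sketch of a jobspec that usually avoids the "vars map does not contain key "attr"" complaint, assuming the file is rendered through Terraform's templatefile() (the job, image, and file names below are illustrative). Terraform claims the ${...} syntax for its own template variables, so the Nomad interpolation has to be escaped as $${...} in the template so that a literal ${attr.unique.hostname} reaches Nomad, which then resolves it per client node:

# worker.nomad.tpl, rendered by Terraform's templatefile(); names are illustrative
job "airflow-worker" {
  datacenters = ["dc1"]

  group "worker" {
    task "worker" {
      driver = "docker"

      config {
        image = "apache/airflow:2.6.3"
        # "$$" is Terraform's escape for a literal "$"; after rendering,
        # Nomad sees hostname = "${attr.unique.hostname}" and interpolates
        # it on the client that runs the task
        hostname = "$${attr.unique.hostname}"
      }

      template {
        data        = <<EOT
HOST_NAME="{{ env "attr.unique.hostname" }}"
EOT
        destination = "local/worker.env"
        env         = true
      }
    }
  }
}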

Related

Pass variable name in Jenkins Vault secrets path

I am not able to pass ${environment} in the Vault secret path for reading the values.
Maybe the secret is getting initialized before the variables are set.
Kindly help, as I'm not able to read environment-specific values from the same Vault repo.
It worked nicely for me using a choice parameter in a parameterized build. I think your issue is the Vault path you used (vault/secret/$environment); the correct path in your case is probably just "secret/$environment". Does your secrets engine really start with "vault"?
Just FYI, if you define the variable in "Jenkins > Manage Jenkins > Configure System > Environment variables" it'll work too.
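To illustrate the choice-parameter approach, here is a minimal declarative-pipeline sketch, assuming the HashiCorp Vault plugin's withVault step (with the Vault connection configured globally) and a KV engine mounted at secret/; the parameter name, path, and key names are made up:

pipeline {
    agent any

    parameters {
        // the selected value drives which environment's secrets are read
        choice(name: 'ENVIRONMENT', choices: ['dev', 'staging', 'prod'], description: 'Target environment')
    }

    stages {
        stage('Read secret') {
            steps {
                // note there is no leading "vault/" prefix: the path starts with
                // the mount name ("secret") followed by the environment
                withVault(vaultSecrets: [[
                    path: "secret/${params.ENVIRONMENT}",
                    secretValues: [[envVar: 'DB_PASSWORD', vaultKey: 'db_password']]
                ]]) {
                    sh 'echo "secret loaded for the selected environment"'
                }
            }
        }
    }
}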

How to pass in AWS environmental variables to Nextflow for use in Docker container

I would like to run a Nextflow pipeline inside a Docker container. As part of the pipeline I would like to push to and pull from AWS. To accomplish this, I need to pass AWS credentials into the container, but I do not want to write them into the image.
Nextflow has an option to pass environment variables through to the container as part of the Docker scope via the envWhitelist option; however, I have not been able to find an example of the correct syntax for this.
I have tried the following syntax and get an access-denied error, suggesting that I am not passing the variables in properly.
docker {
    enabled = true
    envWhitelist = "AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID"
}
I explicitly passed these variables into my environment and I can see them using printenv.
Does this syntax seem correct? Thanks for any help!
Usually you can just keep your AWS security credentials in a file called ~/.aws/credentials:
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not defined in the environment, Nextflow will attempt to retrieve credentials from your ~/.aws/credentials or ~/.aws/config files.
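For reference, that file uses the standard AWS shared-credentials format, for example:

# ~/.aws/credentials
[default]
aws_access_key_id = <YOUR S3 ACCESS KEY>
aws_secret_access_key = <YOUR S3 SECRET KEY>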
Alternatively, you can declare your AWS credentials in your nextflow.config (or in a separate config profile) using the aws scope:
aws {
    accessKey = '<YOUR S3 ACCESS KEY>'
    secretKey = '<YOUR S3 SECRET KEY>'
    region = '<REGION IDENTIFIER>'
}
You could also use an IAM Instance Role to provide your credentials.
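If the keys must stay out of both the image and a committed config file, one possible variant (a sketch only; nextflow.config is evaluated as Groovy, so System.getenv() should be usable there) is to read them from the host environment inside the aws scope:

// nextflow.config - sketch; the fallback region shown is only an example
aws {
    accessKey = System.getenv('AWS_ACCESS_KEY_ID')
    secretKey = System.getenv('AWS_SECRET_ACCESS_KEY')
    region    = System.getenv('AWS_DEFAULT_REGION') ?: 'eu-west-1'
}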

dollar sign in environment variable specified when launching docker container

I'm running redmine in a docker container. Within redmine I want to send email through smtp. To do that I need to set environment variables when launching the container, eg.:
docker run --name=redmine ... --env='SMTP_HOST=host.com' --env='SMTP_USER=user@host.com' --env='SMTP_PASS=$mypassword'
I didn't choose the password, and unfortunately it really starts with a dollar sign. If I just provide the password as-is, the SMTP_PASS variable in the container ends up empty, because there is no variable 'mypassword' defined. How do I specify a password that contains a $ sign?
You can escape it with a backslash: --env='SMTP_PASS=\$mypassword'
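Depending on where the expansion actually happens, a few common ways to keep the dollar sign literal (a sketch; the variable names follow the question):

# single quotes stop the calling shell from expanding $mypassword at all
docker run --env 'SMTP_PASS=$mypassword' ...

# inside double quotes, a backslash keeps the $ literal
docker run --env "SMTP_PASS=\$mypassword" ...

# docker-compose.yml interpolates $VAR itself, so a literal dollar sign
# is written there as $$:
#   environment:
#     SMTP_PASS: $$mypassword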

Set up admin password from environment variable

I have deployed influxdb2 as a statefulset in my k8s cluster.
I have set the environment variables as follows:
DOCKER_INFLUXDB_INIT_MODE=setup
DOCKER_INFLUXDB_INIT_USERNAME=admin
DOCKER_INFLUXDB_INIT_PASSWORD=Adm1nPa$$w0rd
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=Adm1nT0k3n
The first time I ran my manifest, it worked just fine and I could log in to the GUI using the provided secrets.
Now I want to rotate those secrets, so I changed the variables, redeployed my StatefulSet, and found this error:
2022-06-15T11:35:46. info found existing boltdb file, skipping setup wrapper {"system": "docker", "bolt_path": "/var/lib/influxdb2/influxd.bolt"}
Indeed, if I log into my pod I can browse /var/lib/influxdb2/influxd.bolt and find the previous admin secret values: Adm1nT0k3n and Adm1nPa$$w0rd.
How can I force influxdb2 to use the new environment variables DOCKER_INFLUXDB_INIT_PASSWORD and DOCKER_INFLUXDB_INIT_ADMIN_TOKEN?
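For context, the DOCKER_INFLUXDB_INIT_* variables are only consumed by the image's first-boot setup; once influxd.bolt exists they are ignored, so credentials are normally rotated through the influx CLI or API instead. A rough sketch, assuming the official influxdb:2.x image, a pod named influxdb2-0, and that the CLI config written during the initial setup is still valid:

# rotate the admin password (the command prompts for the new value)
kubectl exec -it influxdb2-0 -- influx user password --name admin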

Accessing Elastic Beanstalk environment properties in Docker

So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears that Docker has some different rules with regard to how the environment properties are handled, is that correct? According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, and no custom properties will be present. That's what it sounds like to me, especially considering that the containers that do support custom environment properties say so explicitly, like Python shown here [2]. Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode; all my environment-specific configuration is then set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. This looks like an omission in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass an environment variable at docker run time when using Elastic Beanstalk, but you are not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder together with the Dockerrun.aws.json and the Dockerfile, and upload it to Beanstalk
To see the result, run "docker inspect CONTAINER_ID" inside the EC2 instance and you will see the environment variable.
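For the single staging/production flag the original question asks for, the .config file might look like this (APP_ENV is an illustrative variable name; the namespace line can be spelled out explicitly, although the environment namespace is the default):

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: APP_ENV
    value: staging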
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current, which is where your code should live within the EB instance.
Use a package like python-dotenv to load the .env file, or something similar if you aren't using Python. Note that this solution is generic to any language/framework you're running within your container.
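As a small illustration of that last step, a sketch of loading the generated file with python-dotenv (APP_ENV is a made-up variable name):

# sketch: read the .env produced by the ebextension above
import os
from dotenv import load_dotenv

load_dotenv("/var/app/current/.env")
mode = os.getenv("APP_ENV", "production")  # APP_ENV is illustrative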
I don't think the docs are wrong here, contrary to what Rohit Banga's answer suggests, though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs say, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open. You specify whatever variables you want; they make no assumptions about what you are running, whether Rails (Ruby), Django (Python), etc., because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.
