Get all environment variables of a Docker container including security variables

Is there a way to list all environment variables that a Docker image accepts, including authentication-related ones, so I can make the best use of the image?
For example, I've run a redis:7.0.8 container and I want to use every possible feature this image offers.
First I used docker inspect and saw this:
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GOSU_VERSION=1.16",
"REDIS_VERSION=7.0.8",
"REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-7.0.8.tar.gz",
"REDIS_DOWNLOAD_SHA=06a339e491306783dcf55b97f15a5dbcbdc01ccbde6dc23027c475cab735e914"
],
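For reference, a shorter way to pull out just that block, assuming the container is named my-container and jq is available:
$ docker inspect --format '{{json .Config.Env}}' my-container | jq .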
I also tried docker exec -it my-container env which just showed me the same thing. I know there are more variables, for example this doesn't include the following:
REDIS_PASSWORD
REDIS_ACLS
REDIS_TLS_CERT_FILE

Absent documentation, this is pretty much impossible.
Let's start by repeating @jonrsharpe's comment:
They accept any env var at all, but they won't respond to all of them.
Consider this Python code, for example:
#!/usr/bin/env python3
import os

def get_environ(d, name):
    # Helper: look up a name in an environment-like mapping.
    return d.get(name, 'absent')

foo = os.environ.get('FOO', 'default_foo')
star_foo = get_environ(os.environ, foo)
print(star_foo)
This fragment looks up an environment variable $FOO. You could probably figure that out, if you knew the main process was in Python and recognized os.environ. But then it passes that value and the standard environment to a helper function, which looks up that environment variable by name. You'd need detailed static analysis to understand this is actually also an environment-variable lookup.
$ ./test.py
absent
$ default_foo=bar ./test.py
bar
$ FOO=BAR BAR=quux ./test.py
quux
$ I=3 ./test.py
absent
(A fair bit of the code I work with accesses environment variables rather haphazardly; it's not just "find the main function" but "find every ENV reference in every file in every library". Some frameworks like Spring Boot make it possible to set hundreds of configuration options via environment variables, and even if it were possible to enumerate every possible setting here, the output would be prohibitively long.)
"What environment variables are there" isn't standard container metadata. You'd have to identify the language the main container process runs, and do this sort of analysis on it, including compiled languages. That doesn't seem like a solvable problem.

Related

My docker container keeps instantly closing when trying to run an image for bigcode-tools

I'm new to Docker, and I'm not quite sure how to deal with this situation.
So I'm trying to run a docker container in order to replicate some results from a research paper, specifically from here: https://github.com/danhper/bigcode-tools/blob/master/doc/tutorial.md
(image link: https://hub.docker.com/r/tuvistavie/bigcode-tools/).
I'm using a Windows machine, and every time I try to run the docker image (via: docker run -p 80:80 tuvistavie/bigcode-tools), it instantly closes. I've tried running other images, such as getting-started, but that image doesn't close instantly.
I've looked at some other potential workarounds, like using -dit, but since the instructions require setting an alias/doskey for a docker run command, using the alias and chaining it with other commands multiple times results in creating a queue for the docker container since the port is tied to the alias.
Like in the instructions from the GitHub link, I'm trying to set an alias/doskey to make API calls to pull data, but I am unable to get any data, nor am I getting any errors when performing the calls on the command prompt.
Sorry for the long question, and thank you for your time!
Going in order of the instructions:
0. I can run this; it added the image to my Docker Desktop.
1.
Since I'm using a Windows machine, I had to use 'set' instead of 'export'.
I'm not exactly sure what the $ means in UNIX, or whether it's significant, but from my understanding the whole purpose is to create a directory named 'bigcode-workspace'.
Instead of 'alias', I needed to use doskey.
Since -dit prevented my image from instantly closing, I added that in as well, though I'm not 100% sure what it does. Running plain docker run (...) resulted in the docker image instantly closing.
When it came to using the doskey alias + another command, I've tried:
(doskey macro) (another command)
(doskey macro) ^& (another command)
(doskey macro) $T (another command)
This also seemed to be using a GitHub API call, so I also added a --token=(github_token), but that didn't change anything either.
Because the later steps require expected data pulled from here, I am unable to progress any further.
It looks like this image is designed to be used as a command-line utility, so it should not run continuously; instead, you run it via the docker-bigcode alias for each task.
$BIGCODE_WORKSPACE is an environment variable expansion here, so on a Windows machine it's %BIGCODE_WORKSPACE%. You might want to set this variable in Settings -> System -> About -> Advanced System Settings, because variables set with the SET command apply to the current command prompt session only. Or you can specify the path directly, without an environment variable.
As for the alias, I would just create a batch file with the following content:
docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools %*
This will run the specified command, appending the batch file's parameters at the end. You might need to add double quotes if the BIGCODE_WORKSPACE path contains spaces.
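For example, a minimal docker-bigcode.cmd sketch with that quoting applied (the file name is illustrative):
@echo off
rem Wrapper around the bigcode-tools image; %* forwards all arguments to the container.
docker run -p 6006:6006 -v "%BIGCODE_WORKSPACE%:/bigcode-tools/workspace" tuvistavie/bigcode-tools %*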

Passing environment variables from Dockerfile to app code

I have an environment variable on my server (local raspberry pi) that stores a token that I need. When I run the docker container myself, it doesn't seem to have any problems getting the token from the environment variable. The variable is exported in my .bashrc and can be seen with echo.
When running a workflow through github actions with the same steps, the application cannot find the token.
The environment variable is consistently not there when I check for it. After thinking maybe it was having trouble accessing my .bashrc file, I made a github secret and have been trying to pass the value referencing that instead as you can see below.
I have these lines in my Dockerfile:
ARG MY_TOKEN
ENV MY_TOKEN=$MY_TOKEN
And this is my workflow yaml:
docker build --build-arg MY_TOKEN=${{ secrets.MY_TOKEN }} -t my_img ~/my_project
docker run -de MY_TOKEN --restart=always --name=my_container my_img
Any guidance will be greatly appreciated. I feel like I could get this to work by passing system arguments, but:
I'm unsure if that's good practice
I'd feel better if someone could point out the bonehead mistake that's holding me back before doing something else
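For reference, a hedged sketch of one common alternative (not confirmed as the fix here): pass the secret at docker run time in the workflow, instead of baking it into the image at build time. Note that -e MY_TOKEN with no value copies the variable from the shell running docker run, which is typically not set inside a workflow step:
docker run -d --restart=always --name=my_container -e MY_TOKEN=${{ secrets.MY_TOKEN }} my_img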

DBT - environment variables and running dbt

I am relatively new to dbt and I have been reading about env_var. I want to use it in a couple of situations, but I am having difficulty and am looking for some support.
Firstly, I am trying to use it in my profiles.yml file to replace the user and password, so that these can be set when dbt is invoked. When trying to test this locally (before implementing it on our AWS side), I am failing to find the right syntax and am not finding anything useful online.
I have tried variations of:
dbt run --vars '{DBT_USER: my_username, DBT_PASSWORD=my_password}'
but it is not recognized, and it gives nothing useful error-wise. When running dbt run by itself it does ask for DBT_USER, so it is expecting the variable, but it doesn't detail how to supply it.
I would also like to use it in my dbt_project.yml for the schema but I assume that this will be similar to the above, just a third variable at the end. Is that the case?
Thanks
var and env_var are two separate features of dbt.
You can use var to access a variable you define in your dbt_project.yml file. The --vars command-line option lets you override the values of these vars at runtime. See the docs for var.
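For example, a minimal sketch of the var side (the variable name start_date is hypothetical):
# dbt_project.yml
vars:
  start_date: "2023-01-01"
A model can then reference {{ var('start_date') }}, and you can override it at runtime with:
$ dbt run --vars '{start_date: "2023-06-01"}'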
You should use env_var to access environment variables that you set outside of dbt for your system, user, or shell session. Typically you would use environment variables to store secrets like your profile's connection credentials.
To access environment variables in your profiles.yml file, you replace the values for username and password with a call to the env_var macro, as they do in the docs for env_var:
profile:
  target: prod
  outputs:
    prod:
      type: postgres
      host: 127.0.0.1
      # IMPORTANT: Make sure to quote the entire Jinja string here
      user: "{{ env_var('DBT_USER') }}"
      password: "{{ env_var('DBT_PASSWORD') }}"
      ....
Then, BEFORE you issue the dbt run command, you need to set the DBT_USER and DBT_PASSWORD environment variables for your system, user, or shell session. This will depend on your OS, but there are lots of good instructions out there. To set a var for your shell session (on Unix OSes), that could look like this:
$ export DBT_USER=my_username
$ export DBT_PASSWORD=abc123
$ dbt run
Note that storing passwords in environment variables isn't necessarily more secure than keeping them in your profiles.yml file, since they're stored in plaintext and not protected from being dumped into logs, etc. (You shouldn't be checking profiles.yml into source control.) You should consider at least using an environment variable name prefixed with DBT_ENV_SECRET_ so that dbt keeps it out of logs. See the docs for more info.
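A minimal sketch of that prefixed variant (values are illustrative; profiles.yml would reference env_var('DBT_ENV_SECRET_PASSWORD') instead):
$ export DBT_ENV_SECRET_PASSWORD=abc123
$ dbt run
dbt scrubs the values of variables with this prefix from its log output.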

How to add custom environment variables to docker-ejabberd

I am running docker-ejabberd on ECS and all works fine. Now I want to replace the MySQL user/pass that exists in the ejabberd.yml file with environment variables passed to the image when running the container. There is no clear way described, even on the docker-ejabberd wiki or anywhere else, on how to do that simply. Has anyone faced a similar situation, and how did you do it?
For example in the ejabberd.yml i have this section:
sql_server: ${MYSQL_SERVER}
sql_database: ${MYSQL_DATABASE_NAME}
sql_username: ${MYSQL_USERNAME}
sql_password: ${MYSQL_PASSWORD}
sql_port: ${MYSQL_PORT}
I want to pass those vars as env vars during docker run and have them replaced before the container starts.
Side note: We are using ECS and passing the variables through the task definition without any issue.
I went through some topics that recommend using the ENTRYPOINT instruction to run a script that rewrites the file before the container starts, but I'm not sure if that's a good idea.
Also, I have an idea of replacing the variables in this ejabberd.yml file in the CI/CD pipeline, just after getting the code from the git repository and before building the image on AWS ECR. Would that work?
I want to replace the MySQL user/pass that exists in the ejabberd.yml file with environment variables passed to the image when running the container.
The ejabberd.yml file is read and parsed by the yconf library (https://github.com/processone/yconf), and I doubt it supports such a thing.
I went through some topics that recommend using the ENTRYPOINT instruction to run a script that rewrites the file before the container starts, but I'm not sure if that's a good idea.
Following that recommendation, if you don't want a script to manipulate the whole ejabberd.yml, you can ensure that only those specific options are parametrized:
You can have a script write those options to a small file, and then include that file's options into ejabberd.yml using include_config_file:
https://docs.ejabberd.im/admin/configuration/file-format/#include-additional-files
For example, in your ejabberd.yml, put something like this:
include_config_file:
  /etc/ejabberd/database.yml:
    allow_only: [sql_server, sql_database, sql_username, sql_password, sql_port]
Then write your script, that generates that small file, for example:
$ generate-database-config.sh
$ cat /etc/ejabberd/database.yml
sql_server: "localhost"
sql_database: "ejaup"
sql_username: "ejabberd_test"
sql_password: "ejabberd_test"
sql_port: 3306
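A minimal sketch of what generate-database-config.sh could look like, assuming the MYSQL_* variables from the question are passed to docker run (the values above are just what it happened to print):
#!/bin/sh
# Render the include file from the container's environment variables.
cat > /etc/ejabberd/database.yml <<EOF
sql_server: "${MYSQL_SERVER}"
sql_database: "${MYSQL_DATABASE_NAME}"
sql_username: "${MYSQL_USERNAME}"
sql_password: "${MYSQL_PASSWORD}"
sql_port: ${MYSQL_PORT}
EOF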

Accessing Elastic Beanstalk environment properties in Docker

So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regards to how the environment properties are handled. Is that correct?

According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering that the containers that do support custom environment properties say so explicitly, like Python shown here [2].

Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode; then all my environment-specific configurations are set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. Looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass an environment variable at docker run time using Elastic Beanstalk, but it's not possible to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with the Dockerrun.aws.json and Dockerfile, and upload it to Beanstalk.
To see the result, inside the EC2 instance, execute the command "docker inspect CONTAINER_ID" and you will see the environment variable.
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current, which is where your code should be within the EB instance.
Use a package like python-dotenv to load the .env file or something similar if you aren't using Python. Note that this solution should be generic to any language/framework that you're using within your container.
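For the Python case, a minimal loading sketch (python-dotenv assumed installed; APP_MODE is a hypothetical variable name):
from dotenv import load_dotenv
import os

# Load the .env file written by the ebextension above.
load_dotenv("/var/app/current/.env")
mode = os.environ.get("APP_MODE", "staging")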
I don't think the docs are a miss, as Rohit Banga's answer suggests. Though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs says, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open. You specify whatever variables you want ... they make no assumptions as to what you are running, Rails (Ruby), Django (Python), etc. ... because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.
