Setting the LOGGING__CONSOLE__FORMATTERNAME in aspnet6.0 image - docker

How do I set this environment variable when I run a .NET 6.0 docker file?
I have a docker image based on the aspnet 6.0 Docker image. By default the environment variable is set to Json. I want to set it to Simple without changing the code.
The console log formatter is here: https://learn.microsoft.com/en-us/dotnet/core/extensions/console-log-formatter
Can this be done?
I thought it would be as simple as:
docker run --env LOGGING__CONSOLE__FORMATTERNAME=Simple <CONTAINER_NAME>
This does set an environment variable with that name. However, it does not overwrite the existing environment variable; it results in a duplicate setting. I would expect the above command to overwrite the setting for LOGGING__CONSOLE__FORMATTERNAME.
Therefore my console output is still formatted as JSON and not in Simple format as I expect.

The reason it is not overriding the setting is that the capitalisation must match the existing environment variable: Logging__Console__FormatterName
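Linux environment variables are case-sensitive, so the mismatched name creates a second variable instead of replacing the one baked into the image. A quick sketch outside Docker shows the effect:

```shell
# These are two distinct variables, not one overriding the other:
export Logging__Console__FormatterName=Json        # as set by the base image
export LOGGING__CONSOLE__FORMATTERNAME=Simple      # as set via docker run --env
env | grep -ci formattername                       # prints 2: both exist
```

Passing the variable with the exact casing the image uses, e.g. `docker run --env Logging__Console__FormatterName=Simple <IMAGE_NAME>`, replaces the existing value instead of duplicating it.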
However, the dotnet/aspnet Docker container has since reverted to the 5.0 behaviour and no longer overrides the log format to JSON.
https://learn.microsoft.com/en-us/dotnet/core/compatibility/containers/6.0/console-formatter-default

Related

How to pass an environment variable to spark-defaults.conf

I want to run the Apache Spark history server in a docker image. To achieve this I had to change spark-defaults.conf and add this line:
spark.history.fs.logDirectory /path/to/remote/logs
And then run start-history-server.sh
This works fine when I set the value statically. However, I want the value to come from an environment variable that is set on the docker container at run time, so I want something like this:
spark.history.fs.logDirectory ${env.path_to_logs}
However, this doesn't work, since spark-defaults.conf doesn't expand environment variables. Is there a solution for this, or maybe a parameter to add when running start-history-server.sh?

going from .env to environment variables

So I have been tasked with taking an existing dockerized version of a service, and creating docker images from this repository.
Creating the images is not the problem, since the build command starts it up with no issue. The issue is that this Dockerfile copies a .env file during build that holds variables which must be customizable after the build process is done (expected db and other endpoint info).
Is there some way to set that file to automatically be changed to reflect the environmental variables used in the docker run command? (I do want to note, that the docker image does copy the .env file into the working directory, it is not docker-compose reading that .env file)
I am sure that there has to be an easy way to do this, but all the tutorials I am pulling up just show you how to declare these variables, not how to get the files in docker to use them! Most of the code being run is JavaScript, and uses npm and yarn, if that makes any difference.
Docker does not provide any way to update files from environment variables on container start. But I don't think this is what you need anyway:
As I understand it, a .env file with default values is copied into the image at build time, and you want to be able to change some of those values at runtime via container environment variables?
Usually such a .env file is read by the application and complemented by any variables set in the environment, i.e. you can override values from the file with environment variables. For JavaScript projects, dotenv is a popular module for this.
So to override, say, an API_ENDPOINT variable specified in .env, you simply need to pass an environment variable with the same name and the desired value to the container:
docker run -e API_ENDPOINT=/other/endpoint ...
If for some reason your applications do not follow this convention and you actually need to change the values in the .env file itself, you will need to write a custom script that updates/generates .env from the passed environment variables and use that script as the ENTRYPOINT.

How to make environment variable visible to non root users inside the container?

I am trying to pass environment variables to be read from an XML file inside a docker container running a WildFly app server, hosted in a RHEL 7 image.
What I've done so far:
I've created an environment file of key-value pairs, for example: FILESERVICE_MAX_POOL_SIZE=5
I am running docker by referencing the environment file: docker run -d --env-file ./ENV_VARIABLES <myImage>
In the Dockerfile I copy the xml template I need: COPY dockerfiles/standalone.xml /opt/wildfly/standalone/configuration/standalone.xml
Inside the XML template I'm trying to reference the environment variable: <max-pool-size>${env.FILESERVICE_MAX_POOL_SIZE}</max-pool-size>
I can see those environment variables inside the running container as root, but not as the wildfly user which needs them. How can I make a variable visible to a specific user other than root?
Clearly I'm doing something fundamentally wrong here, I'm just not sure what.
Thanks in advance for your help.
Problem solved: WildFly couldn't see the variables because my startup script didn't pass the -E flag to sudo to preserve environment variables.
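For illustration, the fix amounts to keeping the environment when switching users in the startup script (the WildFly path below is an assumption; adapt it to your image):

```shell
# Without -E, sudo resets the environment, so the wildfly user never sees the
# variables supplied via --env-file; -E preserves the caller's environment.
export FILESERVICE_MAX_POOL_SIZE=5
sudo -E -u wildfly /opt/wildfly/bin/standalone.sh
```

Note that exporting is still required: only exported variables are inherited by child processes in the first place.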

Docker - Changes to postfix's main.cf file are overriden on container start

I am trying to setup a dockerized Nagios. For that, I am using the already working image from jasonrivers: Dockerfile
Now, I need to slightly adjust the postfix that is already installed in the image. I need to set up a relayhost so that e-mails sent from Nagios are forwarded to my mail server, which should be as simple as setting the "relayhost" property in /etc/postfix/main.cf.
However, no matter how I adjust this value in my Dockerfile (I tried both sed and a COPY), when I inspect /etc/postfix/main.cf after starting the container, the relayhost value has been overridden to an empty value.
At first I thought this had something to do with Docker itself; I thought that somehow my Dockerfile steps adjusting this file did not end up affecting the final image. However, when I override main.cf with gibberish (like setting its content to just "foo"), then upon running the image postfix throws errors about it.
To put the words into code, consider this Dockerfile:
FROM jasonrivers/nagios:latest
RUN echo "relayhost = www.abc.com" > /etc/postfix/main.cf
Building this and then running the resulting image will result in a /etc/postfix/main.cf file with contents
relayhost =
I have tried using google to figure out how postfix works and why it does this, but the only suggestion I found was that something is configured in "master.cf", which it is not (you can download the image and test all this yourself).
The JasonRivers/Docker-Nagios repo for the image has a feature in the postfix startup script (overlay/etc/sv/postfix/run) that modifies that setting:
sed -i "s/relayhost =.*/relayhost = ${MAIL_RELAY_HOST}/" /etc/postfix/main.cf
Set the MAIL_RELAY_HOST environment variable to your host:
ENV MAIL_RELAY_HOST=www.abc.com
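Because the substitution happens in the run script at container start, the value can also be supplied at run time with `docker run -e MAIL_RELAY_HOST=www.abc.com ...` instead of being baked into the image. The sed behaviour can be sketched outside the container like this:

```shell
# Simulate what overlay/etc/sv/postfix/run does on container start:
printf 'relayhost =\n' > /tmp/main.cf          # the empty value the image resets to
MAIL_RELAY_HOST=www.abc.com                    # supplied via ENV or docker run -e
sed -i "s/relayhost =.*/relayhost = ${MAIL_RELAY_HOST}/" /tmp/main.cf
cat /tmp/main.cf                               # prints: relayhost = www.abc.com
```

This also explains the original symptom: when MAIL_RELAY_HOST is unset, the sed rewrites whatever relayhost line the Dockerfile put there back to an empty value.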

Accessing Elastic Beanstalk environment properties in Docker

So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears that Docker has different rules for how environment properties are handled, is that correct?
According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment, with no custom properties. That's what it sounds like to me, especially since the containers that do support custom environment properties say so explicitly, like Python shown here [2].
Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode; my environment-specific configuration is then set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container; this looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass an environment variable at docker run time using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with Dockerrun.aws.json and the Dockerfile, and upload it to Beanstalk
To see the result, execute "docker inspect CONTAINER_ID" inside the EC2 instance and you will see the environment variable.
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current, which is where your code lives on the EB instance.
Use a package like python-dotenv to load the .env file, or something similar if you aren't using Python. Note that this solution is generic to any language/framework you're using within your container.
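For non-Python containers, the generated .env (lines of KEY="value") can also be loaded from a shell entrypoint using allexport, a minimal sketch:

```shell
# Export every assignment in the generated .env into the current environment.
set -a                      # auto-export all subsequent variable assignments
. /var/app/current/.env
set +a
```

After this, any process launched from the entrypoint inherits the variables without the application needing a dotenv library at all.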
I don't think the docs are a miss, as Rohit Banga's answer suggests. Though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs say, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open: you specify whatever variables you want. They make no assumptions as to what you are running, Rails (Ruby), Django (Python), etc., because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.