Cloud Foundry documentation clearly states that the CF runtime ensures availability of environment variables even when the application is stopped and restarted. I understand that user-provided or application-specific environment variables can be unset via the cf CLI unset-env command.
But are there any other situations where they can be unset or lost without being explicitly unset, e.g. during a subsequent update of the application, a redeployment, or the creation of a new container or droplet?
In other words, what is the lifetime of an environment variable that I set via the set-env command, assuming I don't include it in the manifest YAML of any subsequent deployment of my application? Does the CF runtime ensure it is available to my application forever, until I explicitly unset it?
Yes, it's available forever until you explicitly unset it.
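As a quick illustration of the lifecycle described above, here is a sketch of the relevant cf CLI commands; the app name my-app and variable name FEATURE_FLAG are hypothetical:

```shell
# Set a user-provided environment variable on a hypothetical app "my-app".
# The value is stored by the platform, not in the droplet, so it survives
# restarts, restages, and newly built droplets.
cf set-env my-app FEATURE_FLAG enabled
cf restage my-app        # needed for the running app to pick up the change
cf env my-app            # verify: listed under the user-provided section
cf unset-env my-app FEATURE_FLAG   # the only way the variable goes away
```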
I would like to provide our container with an optional OIDC/Keycloak integration, disabled by default but possible to enable when starting a container via environment variables.
This is what the configuration looks like in application.properties at build time:
quarkus.oidc.enabled=false
# quarkus.oidc.auth-server-url=<auth-server-url>
# quarkus.oidc.client-id=<client-id>
# quarkus.oidc.credentials.secret=<secret>
Ideally, on container start, quarkus.oidc.enabled=true could be set alongside the other three properties via container environment variables.
However, Quarkus won't allow this: quarkus.oidc.enabled can apparently only be set at build time, not overridden at runtime (https://quarkus.io/guides/security-openid-connect#configuring-the-application).
I have found a Google Groups thread that picks up on this topic (https://groups.google.com/g/quarkus-dev/c/isGqZvY829g/m/BNerQvSRAQAJ), mentioning the use of quarkus.oidc.tenant-enabled=false instead, but I am not sure how to apply this strategy in my use case.
Can anyone help me out here on how to make this work without having to build two images (one with OIDC enabled, and one without)?
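Following the tenant-enabled suggestion from the linked thread, one possible approach is a sketch like the following: bake quarkus.oidc.tenant-enabled=false into the image as the default, then flip it at container start via environment variables. Quarkus (via MicroProfile Config) maps a property name to an environment variable by uppercasing it and replacing dots and dashes with underscores; the image name and Keycloak URL below are placeholders:

```shell
# application.properties baked into the image (build time):
#   quarkus.oidc.tenant-enabled=false
#   (leave auth-server-url / client-id / secret unset or as placeholders)

# MicroProfile Config env-var mapping: uppercase, '.' and '-' become '_'.
prop="quarkus.oidc.tenant-enabled"
envvar=$(echo "$prop" | tr '[:lower:]' '[:upper:]' | tr '.-' '__')
echo "$envvar"   # QUARKUS_OIDC_TENANT_ENABLED

# Enable OIDC at runtime on a hypothetical image "my-image":
# docker run \
#   -e QUARKUS_OIDC_TENANT_ENABLED=true \
#   -e QUARKUS_OIDC_AUTH_SERVER_URL=https://keycloak.example/realms/myrealm \
#   -e QUARKUS_OIDC_CLIENT_ID=my-client \
#   -e QUARKUS_OIDC_CREDENTIALS_SECRET=changeme \
#   my-image
```

This keeps a single image: the OIDC extension is built in, but the default tenant stays disabled unless the environment variables are supplied.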
I am working on a migration from an on-premise system to Cloud Composer. The thing is that Cloud Composer is a fully managed version of Airflow, which restricts access to the underlying file system. On my on-premise system I have a lot of environment variables for paths; we save them as values like /opt/application/folder_1/subfolder_2/....
Looking at the Cloud Composer documentation, it says you can access and save your data in the data folder, which is mapped to /home/airflow/gcs/data/. This implies that if I move forward with that mapping, I will have to change my environment variable values to something like /home/airflow/gcs/data/application/folder_1/folder_2, which could be a bit painful, given that I'm running many bash scripts that rely on those values.
Is there an approach to solve this problem?
You can specify your environment variables during the Composer creation/update process [1]. These variables are then stored in the YAML files that create the GKE cluster where Composer is hosted. If you SSH into a VM running the Composer GKE cluster, enter one of the worker containers, and run env, you can see the environment variables you specified.
[1] https://cloud.google.com/composer/docs/how-to/managing/environment-variables
I'm trying to set up our .NET Framework application on Windows AKS and need an elegant way to pass in ApplicationSettings.config and Connectionstrings.config per environment. I've been trying to use lifecycle hooks and init containers, but no luck so far.
Any recommendations?
Thanks
When delivering applications in a containerized format, such as a Docker image on a k8s cluster, a common pattern is to read configuration from environment variables.
When drawing configuration from environment variables at runtime, you can set up your pods so that data stored in ConfigMaps is injected into environment variables.
If you want to avoid a code change, here is an excellent article on generating your config files based on environment variables using a startup script.
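A minimal sketch of the ConfigMap-to-environment-variable injection described above; the names (app-config, my-app) and the image reference are illustrative, not taken from the question:

```shell
# Write an illustrative manifest: a ConfigMap plus a Pod that imports
# every key of the ConfigMap as an environment variable via envFrom.
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  CONNECTION_STRING: "Server=db;Database=app"
  APP_SETTING_MODE: "Staging"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-registry/my-app:latest
    envFrom:                 # each ConfigMap key becomes an env var
    - configMapRef:
        name: app-config
EOF
# kubectl apply -f pod.yaml
```

Keeping one ConfigMap per environment (dev/staging/prod) lets the same image run everywhere with only the referenced ConfigMap changing.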
With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea because they remain available as a system variable and because they are exposed with docker inspect.
In kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as a file accessible in the file system.
Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?
I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.
Environment variables aren't treated very securely by the OS or applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application, effectively as a global variable.
A file can be read directly into the application and handled by the routine that needs it as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers that is never written to disk.
Secrets injected as environment variables into the configuration of the container are also visible to anyone who has access to inspect the containers. Quite often those variables are committed into version control, making them even more visible. Separating the secret into its own object that is flagged for privacy allows you to manage it differently from open configuration like environment variables.
Yes, since when you mount the secret as a file, the actual value is not visible through docker inspect or other pod management tools. Moreover, you can enforce file-level access control at the host's file system level for those files.
For more suggested reading, see Kubernetes Secrets.
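A sketch of the file-mount approach the answers above recommend; the secret name, pod name, and image are illustrative assumptions:

```shell
# Write an illustrative manifest mounting a TLS secret as read-only files
# instead of exposing it through environment variables.
cat > secret-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-registry/my-app:latest
    volumeMounts:
    - name: tls
      mountPath: /etc/tls    # app reads /etc/tls/tls.crt, /etc/tls/tls.key
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: my-tls-secret
      defaultMode: 0400      # owner read-only on the mounted files
EOF
# kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
# kubectl apply -f secret-pod.yaml
```

Nothing about the secret's value appears in the pod spec or in `docker inspect`/`kubectl describe` output; only the mount path and secret name are visible.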
Secrets in Kubernetes are used to store sensitive information like passwords and SSL certificates.
You definitely want to mount SSL certs as files in the container rather than sourcing them from environment variables.
I'm new to LXC containers and am using LXC v2.0. I want to pass settings to the processes running inside my container (specifically, command-line parameters for their systemd service files).
I'm thinking of passing environment variables to the container via the config file: lxc.environment = ABC=DEF. (I intend to use SaltStack to manipulate these variables.) Do I have to manually parse /proc/1/environ to access these variables, or is there a better way I'm missing?
The documentation says:
If you want to pass environment variables into the container (that is, environment variables which will be available to init and all of its descendents), you can use lxc.environment parameters to do so.
I would assume that, since all processes - including the shell - are descendants of the init process, the environment should be available in every shell. Unfortunately, this seems not to be true. In a discussion on linuxcontainers.org, someone states:
That’s not how this works unfortunately. Those environment variables are passed to anything you lxc exec and is passed to the container’s init system.
Unfortunately init systems usually don’t care much for those environment variables and never propagate them to their children, meaning that they’re effectively just present in lxc exec sessions or to scripts which directly look at PID 1’s environment.
So yes, parsing /proc/1/environ seems to be the only option here.
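For completeness, a sketch of that parsing: /proc/1/environ is NUL-separated, so translating NULs to newlines makes it readable. The file is simulated here with printf; inside the container you would read the real /proc/1/environ:

```shell
# Simulate PID 1's environ file (NUL-separated KEY=VALUE entries).
# In a real container, replace environ.sample with /proc/1/environ.
printf 'ABC=DEF\0PATH=/usr/bin\0' > environ.sample

# Make it human-readable:
tr '\0' '\n' < environ.sample

# Extract a single variable (here the ABC=DEF set via lxc.environment)
# so a shell script or systemd ExecStartPre step can use it:
ABC=$(tr '\0' '\n' < environ.sample | sed -n 's/^ABC=//p')
echo "$ABC"   # DEF
```

A systemd unit could run such an extraction in an ExecStartPre script that writes an EnvironmentFile, since, as quoted above, init systems do not propagate these variables to their children on their own.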