I'm new to LXC containers and am using LXC v2.0. I want to pass settings to the processes running inside my container (specifically, command-line parameters for their systemd service files).
I'm thinking of passing environment variables to the container via the config file: lxc.environment = ABC=DEF. (I intend to use SaltStack to manipulate these variables.) Do I have to manually parse /proc/1/environ to access these variables, or is there a better way I'm missing?
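For reference, a minimal sketch of what that config could look like (the variable names are just examples; the path is the usual location for LXC 2.0 containers):

```
# In the container's config file, e.g. /var/lib/lxc/<name>/config
lxc.environment = ABC=DEF
lxc.environment = FOO=bar
```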
The documentation says:
If you want to pass environment variables into the container (that is, environment variables which will be available to init and all of its descendents), you can use lxc.environment parameters to do so.
I would assume that, since all processes - including the shell - are descendants of the init process, the environment should be available in every shell. Unfortunately, this seems not to be true. In a discussion on linuxcontainers.org, someone states:
That’s not how this works unfortunately. Those environment variables are passed to anything you lxc exec and is passed to the container’s init system.
Unfortunately init systems usually don’t care much for those environment variables and never propagate them to their children, meaning that they’re effectively just present in lxc exec sessions or to scripts which directly look at PID 1’s environment.
So yes, obviously parsing /proc/1/environ seems to be the only possibility here.
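A minimal sketch of that parsing (the variable name ABC is from the question; the file path /run/container-env is a hypothetical choice):

```sh
# /proc/1/environ is NUL-separated; convert it to KEY=VALUE lines.
tr '\0' '\n' < /proc/1/environ

# Extract a single variable, e.g. ABC:
ABC=$(tr '\0' '\n' < /proc/1/environ | grep '^ABC=' | cut -d= -f2-)

# For systemd services, one option is to dump PID 1's environment to a
# file once at boot and reference it from units via
# EnvironmentFile=/run/container-env (values with spaces may need quoting):
tr '\0' '\n' < /proc/1/environ > /run/container-env
```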
I'm learning Podman, so I apologize for silly mistakes.
The Redis Docker image has a notorious problem where the database might fail to write if, in the container, /proc/sys/vm/overcommit_memory is not set to 1 (the default value is 0).
I know that Podman doesn't use a daemon, and I thought that this could allow it to set specific host environment values per container. Docker doesn't allow this: one must change the host values, and all containers created afterwards will copy them. So if you set a different value for a host environment variable, it will be applied to all subsequent containers; there's no way to apply it only to a specific one.
The documentation says about --env: "Any environment variables specified will override previous settings". But alas, I think it behaves identically to Docker and doesn't allow one to change the host env per container. I tried podman run ... --env 'overcommit_memory=1' and it made no difference at all. I guess this approach only makes sense for general environment variables and not for specific vm ones.
But I was curious: is it possible at all to change the host env per container in Podman? And specifically, is there any way to change /proc/sys/vm/overcommit_memory per container?
EDIT 1: can podman play kube be of any help?
EDIT 2: one might wonder why I don't wrap the podman run command between echo 1 > /proc/sys/vm/overcommit_memory and a reverting echo 0 > /proc/sys/vm/overcommit_memory afterwards, but I need to use a Windows machine to develop this and I think that wouldn't be possible there
EDIT 3: Eureka! I found a solution [not really, see criztovyl's answer] to my original problem. I just need to create a dir (say, mkdir vm), add an overcommit_memory file to it with content equal to 1, and add -v vm:/proc/sys/vm:rw to the podman run instruction. This way a volume is bound to the container in rw mode and overrides the value of overcommit_memory. But I'm still curious as to whether there's a more straightforward way to change that setting.
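Spelled out as commands, that workaround looks roughly like this (note that a bare name like vm can be treated as a named volume, so a path such as ./vm is assumed here):

```sh
mkdir vm
echo 1 > vm/overcommit_memory
# Bind-mount the directory over /proc/sys/vm inside the container:
podman run -v ./vm:/proc/sys/vm:rw ...
```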
EDIT 4: Actually, COPY init.sh is the best option so far: https://r-future.github.io/post/how-to-fix-redis-warnings-with-docker/ [again, not really, see criztovyl's answer below]
As Richard says, it does not seem to be currently possible to set vm.overcommit_memory per container; you must set it on the host via sysctl (not --sysctl).
This applies to both Podman and Docker: for the "actual" container, both ultimately rely on the same kernel APIs (namespaces and cgroups), and the vm.* parameters are not namespaced, so there is no per-container value to set.
Note that you say "changing the host env", which can be misinterpreted as changing the host's environment variables. Overriding environment variables is possible, as you tried with --env.
But memory overcommit is a kernel parameter, which you must set via sysctl, not environment variables.
Certain sysctl options are overridable via --sysctl, but vm.overcommit_memory is, as far as I can tell, not one of them.
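To illustrate the distinction (the somaxconn value is just an arbitrary example):

```sh
# Namespaced sysctls such as net.* can be set per container:
podman run --sysctl net.core.somaxconn=1024 ...

# vm.* is not namespaced, so overcommit can only be set host-wide:
sudo sysctl vm.overcommit_memory=1
```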
Regarding your first edit: kube play is only a fancy way to "import" pods/containers described in Kubernetes YAML format. In the end it does not offer options you could not also set manually.
Regarding your second edit: I don't think you need to toggle it for development; it should be okay to just keep it enabled. Or keep it disabled altogether, because during development the warning and potentially failing writes should be acceptable.
Your option 3 only silences the warning; the database writes can still fail. It merely makes it appear to Redis that overcommitting is enabled, while overcommit is actually still disabled.
Your option 4 works with a privileged container, enabling overcommit for the whole host. As such it is a convenience option for when you can run privileged containers (e.g. during development), but it will fail when you cannot run privileged (e.g. in production).
I'm using the Docker API to manage my containers from a front-end application, and I would like to know whether it is possible to use /containers/{id}/start with some environment variables; I can't find it in the official docs.
Thanks!
You can only specify environment variables when creating a container. Starting it just starts the main process in the container that already exists with its existing settings; the “start” API call has almost no options beyond the container ID. If you’ve stopped a container and want to restart it with different options, you need to delete and recreate it.
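As a minimal sketch against the engine API over the Unix socket (the API version v1.41 and the container name are assumptions):

```sh
# Env goes into the create call...
curl --unix-socket /var/run/docker.sock -X POST \
  -H 'Content-Type: application/json' \
  -d '{"Image": "redis", "Env": ["ABC=DEF"]}' \
  'http://localhost/v1.41/containers/create?name=web'

# ...while start accepts no such options:
curl --unix-socket /var/run/docker.sock -X POST \
  http://localhost/v1.41/containers/web/start
```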
With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea, because they remain visible in the process environment and because they are exposed with docker inspect.
In Kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as a file accessible in the file system.
Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?
I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.
Environment variables aren't treated very securely by the OS or applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application, effectively as a global variable.
A file can be read directly into the application and handled by the routine that needs it, as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers that is never written to disk.
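For reference, a minimal swarm-mode sketch (the secret name db_password and the image are arbitrary):

```sh
# Create the secret and attach it to a service:
printf '%s' 'hunter2' | docker secret create db_password -
docker service create --name app --secret db_password redis

# Inside the service's containers the value is a file on tmpfs:
#   /run/secrets/db_password
```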
Secrets injected as environment variables into the configuration of the container are also visible to anyone who has access to inspect the containers. Quite often those variables are committed into version control, making them even more visible. Separating the secret into a distinct object that is flagged for privacy allows you to manage it more easily, and differently from open configuration like environment variables.
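That exposure is easy to demonstrate (the container name and variable are made up):

```sh
docker run -d --name demo -e DB_PASSWORD=hunter2 redis
# Anyone with access to the Docker socket can now read the secret:
docker inspect --format '{{.Config.Env}}' demo
# ... prints, among others, DB_PASSWORD=hunter2
```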
Yes, since when you mount the secret as a file, the actual value is not visible through docker inspect or other pod management tools. Moreover, you can enforce file-level access control at the filesystem level of the host for those files.
More suggested reading: Kubernetes Secrets
Secrets in Kubernetes are used to store sensitive information like passwords and SSL certificates.
You definitely want to mount SSL certs as files in the container rather than sourcing them from environment variables.
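As a minimal Kubernetes sketch of the file-mount approach (the names tls-certs and the mount path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: certs
          mountPath: /etc/tls
          readOnly: true
  volumes:
    - name: certs
      secret:
        secretName: tls-certs
```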
I have a Python API that has to know its public address to properly create links to itself (needed when doing paging and other HATEOAS stuff) in the responses it creates. The address is given to the application as an environment variable.
In production it's handled by Terraform, but I also have extensive local tests that make use of Docker Compose. In tests for paging I need to be aware of the fact that I'm running locally, and I need to replace the placeholder address I put in the app's env with http://localhost:<apps_bound_port> when following the links.
I don't want to do that. I'd like to have a way to put the port assigned by Docker into the app's environment variables. The problem wouldn't exist if I were using fixed ports (then I could just put something like http://localhost:8000 in the public address variable), but I can have multiple instances of Compose running at once, which wouldn't work with fixed ports.
I know I can pass environment variables from the shell running docker-compose to the containers, but I don't know of a way to insert the generated port using this approach.
The only solution I have for my problem now is to find a free port before Compose runs, and then pass it in as an environment variable (API_PORT=<FREE_PORT> docker-compose up), while mapping the port like this in docker-compose.yml (host port first, container port second):

```yaml
ports:
  - "${API_PORT}:8000"
```
This isn't ideal, because I run Compose both from the shell (with make) and from Python tests, so I'd need to put the logic for getting the port into an env variable in both places.
Is there something I'm missing, or should I create a feature request for Docker Compose?
It seems it is not possible for one process in a Docker container to use environment variables created by another process in the same container. Can someone please confirm that?
Thanks.
It depends on the relationship between the processes. There is nothing special about environment variables in container processes. The entrypoint ("container root") process gets an environment with link variables and custom variables, but that's it. Otherwise the general rules apply: child processes inherit their environment from their parents.
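A quick way to see this (the container name mycontainer is a placeholder):

```sh
# A child process inherits variables from the shell that starts it:
docker exec mycontainer sh -c 'FOO=bar sh -c "env | grep ^FOO"'   # prints FOO=bar

# A separate, unrelated process in the same container does not see it:
docker exec mycontainer sh -c 'env | grep ^FOO'                   # prints nothing
```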