It seems it is not possible for one process in a Docker container to use environment variables created by another process in the same container. Can someone please confirm that?
Thanks.
It depends on the relationship between the processes. There is nothing special about environment variables in container processes. The entrypoint ("container root") process gets an environment with link and custom variables, but that's it. Otherwise the general rules apply: child processes inherit the environment from their parents.
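For illustration, a minimal sketch (image and container names are arbitrary) of how that inheritance plays out:

    # The main process exports FOO and spawns a child; the child sees FOO
    # because it is a descendant of the exporting shell.
    docker run -d --name envdemo alpine sh -c 'export FOO=bar; sleep 300'

    # A `docker exec` session is NOT a child of that shell, so FOO is unset:
    docker exec envdemo sh -c 'echo "FOO=${FOO:-<unset>}"'    # FOO=<unset>

    # Variables set at container creation, by contrast, reach every process:
    docker run --rm -e GREETING=hello alpine sh -c 'echo "$GREETING"'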
I'm learning Podman so I apologize for silly mistakes.
Docker Redis has a notorious problem: the database might fail to write if /proc/sys/vm/overcommit_memory in the container is not set to 1 (the default value is 0).
I know that Podman doesn't use a daemon, and I thought this might allow it to set specific host environment values per container. Docker doesn't allow this: one must change the host variables, and all containers created afterwards will copy them; if you set a different value for the host environment variables, it is applied to all subsequent containers, with no way to apply it only to a specific one.
The documentation says, for --env: "Any environment variables specified will override previous settings". But alas, I think it behaves identically to Docker and doesn't allow one to change host env per container. I tried podman run .... --env 'overcommit_memory=1' and it made no difference at all. I guess this approach only makes sense for general environment variables and not for specific vm ones.
But I was curious: is it possible at all to change host env per container in Podman? And specifically, is there any way to change /proc/sys/vm/overcommit_memory per container?
EDIT: can podman play kube be of any help?
EDIT 2: one might wonder why I don't wrap the podman run command with echo 1 > /proc/sys/vm/overcommit_memory beforehand and echo 0 > /proc/sys/vm/overcommit_memory afterwards to revert, but I need to use a Windows machine to develop this, and I think that wouldn't be possible.
EDIT 3: Eureka! I found a solution [not really, see criztovyl's answer] to my original problem: I just need to create a dir (say, mkdir vm), add an overcommit_memory file to it with content equal to 1, and add -v vm:/proc/sys/vm:rw to the podman run instruction. This way a volume is bound to the container in rw mode and overwrites the value of overcommit_memory. But I'm still curious whether there's a more straightforward way to change that setting.
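For reference, a sketch of the workaround described in this edit (assuming a Redis image and a path relative to the current directory; see criztovyl's answer for why this is not a real fix):

    # Prepare a directory that mimics /proc/sys/vm with the desired value.
    mkdir -p vm
    echo 1 > vm/overcommit_memory

    # Bind-mount it over the container's /proc/sys/vm in rw mode.
    # This only changes what the container *sees*; the kernel setting
    # itself is untouched.
    podman run -d -v "$PWD/vm:/proc/sys/vm:rw" redis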
EDIT 4: Actually, COPY init.sh is the best option so far: https://r-future.github.io/post/how-to-fix-redis-warnings-with-docker/ [again, not really, see criztovyl's answer below]
As Richard says, it does not currently seem to be possible to set vm.overcommit_memory per container; you must set it on the host via sysctl (not --sysctl).
This applies to both Podman and Docker: for the "actual" container, both ultimately rely on the same kernel APIs (cgroups).
Note that you say "changing the host env", which can be misinterpreted as changing the host's environment variables. Overriding environment variables is possible, as you tried with --env.
But memory overcommit is a kernel parameter, which you must set via sysctl, not an environment variable.
Certain sysctl options can be overridden via --sysctl, but as far as I can tell, vm.overcommit_memory is not one of them.
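For comparison, namespaced sysctls (most net.* keys, for example) can be set per container; vm.overcommit_memory just isn't one of them. A sketch:

    # Works: net.* sysctls are namespaced, so they can differ per container.
    podman run --rm --sysctl net.core.somaxconn=1024 alpine \
        sysctl net.core.somaxconn

    # Rejected by the runtime: vm.* sysctls are global to the kernel.
    podman run --rm --sysctl vm.overcommit_memory=1 alpine true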
Regarding your first edit: kube play is just a fancy way to "import" pods/containers described in Kubernetes YAML format. In the end it does not set any options you could not also set manually.
Regarding your second edit: I don't think you need to toggle it for development; it should be okay to just keep it enabled. Or keep it disabled altogether, because during development the warning and potentially failing writes should be acceptable.
Your option 3 only silences the warning; database writes can still fail. It merely makes it appear to Redis that overcommitting is enabled, while overcommit is actually still disabled.
Your option 4 works with a privileged container, enabling overcommit for the whole host. As such it is a convenience option for when you can run privileged containers (e.g. during development), but it will fail when you cannot run privileged (e.g. in production).
I'm using the Docker API to manage my containers from a front-end application, and I would like to know if it is possible to use /container/{id}/start with some environment variables; I can't find it in the official docs.
Thanks!
You can only specify environment variables when creating a container. Starting it just starts the main process in the container that already exists with its existing settings; the “start” API call has almost no options beyond the container ID. If you’ve stopped a container and want to restart it with different options, you need to delete and recreate it.
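A minimal sketch against the local Engine API socket (image, container name, and variable are arbitrary): the Env array belongs to the create call, and start takes nothing but the ID or name.

    # Create the container with its environment...
    curl --unix-socket /var/run/docker.sock \
         -H 'Content-Type: application/json' \
         -d '{"Image": "alpine", "Cmd": ["env"], "Env": ["MY_VAR=hello"]}' \
         'http://localhost/containers/create?name=demo'

    # ...then start it; no configuration can be passed here.
    curl --unix-socket /var/run/docker.sock -X POST \
         http://localhost/containers/demo/start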
Let's assume a scenario where I'm using a set of CLI docker run commands to create a whole environment of containers and networks (bridge type in my case), and to connect containers to particular networks.
Everything works well as long as I want to have only one such environment on a single machine.
But what if I want to have, on the same machine, an environment similar to the one I've just created but for a different purpose (testing)? I run into name collisions, since I can't create and start containers and networks with the same names.
So far I tried to start the second environment the same way I did the first, but prefixing all container and network names. That worked but had a flaw: in the application, all requests to URIs were broken, since they had the structure
<scheme>://<container-name>:<port-number>
and the application was not able to reach <prefix-container-name>.
What I want to achieve is an exact copy of the first environment, running on the same machine as a second environment, that I could use to perform the application tests etc.
Is there any concept of namespaces or something similar to it in Docker?
Is there a command I could put before all the docker run etc. commands I use to create the environment, so I'd have just two bash scripts that differ only by the namespace command at their beginning?
Can using a virtual machine, i.e. Oracle VirtualBox, be the solution to my problem? Create a VM for the second environment? Isn't that overkill? Won't it add an additional set of troubles?
Perhaps there is a kind of --hostname for the docker run command that would allow the container to be accessed from other containers by that name? Unluckily, --hostname only makes the container reachable by that name from the container itself, not from any other. Perhaps there is an option or command that can create an alias, virtual host, or whatever magic common name I could put into the apps' URIs (<scheme>://<magic-name>:<port-number>), so that creating a second environment with different container and network names would cause no problem as long as that magic name is available on the environment's network.
My need for an exact copy of the environment comes from the tests I want to run, checking whether failures also occur at the dependency level; I think this is quite a simple scenario in a continuous integration process. Are there any dedicated open-source solutions for what I want to achieve? I don't use Docker Compose, just a bash script with all the docker CLI commands to get the whole environment up and running.
Thank you for your help.
Is there any concept of namespaces or something similar to it in Docker?
Not really, no (but keep reading).
Can using a virtual machine [...] be the solution to my problem? ... Isn't that overkill? Won't it add an additional set of troubles?
That's a pretty reasonable solution. That's especially true if you want to further automate the deployment: you should be able to simulate starting up a clean VM and then running your provisioning script on it, then transplant that into your real production environment. Vagrant is a pretty typical tool for trying this out. The biggest issue will be network connectivity to reach the individual VMs, and that's not that big a deal.
Perhaps there is a kind of --hostname for the docker run command that would allow the container to be accessed from other containers by that name?
docker run --network-alias is very briefly mentioned in the docker run documentation and has this effect. docker network connect --alias is slightly more documented and affects a container that's already been created.
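A sketch (network and alias names are arbitrary): the container keeps its prefixed name but stays reachable under a stable alias on its network.

    docker network create test-net

    # Prefixed container name, but other containers on test-net can
    # still reach it as plain "db".
    docker run -d --name test-mongo \
        --network test-net --network-alias db mongo

    # "db" resolves to the test-mongo container:
    docker run --rm --network test-net alpine ping -c 1 db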
Are there any dedicated open source solutions to what I want to achieve?
Docker Compose mostly manages this for you, if you want to move off of your existing shell-script solution: it puts a name prefix on all of the networks and volumes it creates, and creates network aliases for each container matching its name in the YAML file. If your host volume mounts are relative to the current directory, then that content is fairly isolated too. The one thing you can't easily do is launch each copy of the stack on separate host ports, so you have to resolve those conflicts yourself.
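The prefix Compose uses is the project name, which you can override with -p, so two copies of the same stack can coexist (a sketch; the project names are arbitrary):

    # Same docker-compose.yml, two isolated copies: containers, networks
    # and volumes get the project name (-p) as their prefix.
    docker-compose -p myapp-prod up -d
    docker-compose -p myapp-test up -d

    # Tear one copy down without touching the other.
    docker-compose -p myapp-test down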
Kubernetes has a concept of a namespace which is in fact exactly what you're asking for, but adopting it is a substantial investment and would involve rewriting your deployment sequence even more than Docker Compose would.
I'm new to LXC containers and am using LXC v2.0. I want to pass settings to the processes running inside my container (specifically, command-line parameters for their systemd service files).
I'm thinking of passing environment variables to the container via the config file: lxc.environment = ABC=DEF. (I intend to use SaltStack to manipulate these variables.) Do I manually have to parse /proc/1/environ to access these variables, or is there a better way I'm missing?
The documentation says:
If you want to pass environment variables into the container (that is, environment variables which will be available to init and all of its descendents), you can use lxc.environment parameters to do so.
I would assume that, since all processes (including the shell) are descendants of the init process, the environment should be available in every shell. Unfortunately, this seems not to be true. In a discussion on linuxcontainers.org, someone states:
That’s not how this works unfortunately. Those environment variables are passed to anything you lxc exec and is passed to the container’s init system.
Unfortunately init systems usually don’t care much for those environment variables and never propagate them to their children, meaning that they’re effectively just present in lxc exec sessions or to scripts which directly look at PID 1’s environment.
So yes, obviously parsing /proc/1/environ seems to be the only possibility here.
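Since the entries in /proc/1/environ are NUL-separated, a small sketch of reading them from a shell inside the container (ABC as in the example above):

    # Translate NUL separators to newlines: one VAR=value per line.
    tr '\0' '\n' < /proc/1/environ

    # Pull a single variable set via lxc.environment into this shell:
    export "$(tr '\0' '\n' < /proc/1/environ | grep '^ABC=')"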
I've built a simple Docker Compose project as a development environment. I have PHP-FPM, Nginx, MongoDB, and Code containers.
Now I want to automate the process and deploy to production.
The docker-compose.yml can be extended and can define multiple environments. See https://docs.docker.com/compose/extends/ for more information.
However, there are Dockerfiles for my containers, and the dev environment needs more packages than production.
The main question is: should I use separate Dockerfiles for dev and prod and manage them in docker-compose.yml and production.yml?
Separate Dockerfiles are an easy approach, but there is code duplication.
The other solution is to use environment variables and somehow handle them from a bash script (maybe as an entrypoint?).
I am searching for other ideas.
According to the official docs:
... you’ll probably want to define a separate Compose file, say production.yml, which specifies production-appropriate configuration.

Note: The extends keyword is useful for maintaining multiple Compose files which re-use common services without having to manually copy and paste.
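In practice that layering looks like this (a sketch, using the file names from the quote):

    # Development: the base file alone (docker-compose.yml is the default).
    docker-compose up -d

    # Production: the base file plus production-specific overrides.
    docker-compose -f docker-compose.yml -f production.yml up -d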
In docker-compose version >= 1.5.0 you can use environment variables; maybe this suits you?
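For instance, a sketch of variable substitution in the Compose file (the TAG and APP_ENV names are hypothetical):

    # docker-compose.yml may reference variables from the calling shell:
    cat > docker-compose.yml <<'EOF'
    web:
      image: "myapp:${TAG}"
      environment:
        - APP_ENV=${APP_ENV}
    EOF

    # The values come from the environment of the docker-compose invocation.
    TAG=dev APP_ENV=development docker-compose up -d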
If the packages needed for development aren't too heavy (i.e. the image size isn't significantly bigger), you could just create Dockerfiles that include all the components and then decide whether to activate them in the entrypoint, based on the value of an environment variable.
That way you could have the main docker-compose.yml providing the production environment, while development.yml would just add the correct environment variable value where needed.
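A sketch of such an entrypoint (the DEV_MODE variable is hypothetical; the Xdebug step assumes the official PHP image's helper is available):

    #!/bin/sh
    # entrypoint.sh: activate dev-only extras when DEV_MODE=1,
    # e.g. set via `environment:` in development.yml.
    if [ "${DEV_MODE:-0}" = "1" ]; then
        # The tooling is baked into the image but disabled by default.
        echo "Enabling Xdebug..."
        docker-php-ext-enable xdebug
    fi

    # Hand off to the main process (whatever CMD was given).
    exec "$@"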
In this situation it might be worth considering using an "onbuild" image to handle the commonalities among environments, then using separate images to handle the specifics. Some official images have onbuild versions, e.g., Node. Or you can create your own.
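A sketch of rolling your own onbuild-style base image (the image names are hypothetical); the ONBUILD instructions are recorded now and executed when a downstream Dockerfile builds FROM this image:

    # Common steps shared by every environment, deferred to downstream builds.
    cat > Dockerfile.onbuild <<'EOF'
    FROM node:4
    WORKDIR /app
    ONBUILD COPY package.json .
    ONBUILD RUN npm install
    ONBUILD COPY . .
    EOF
    docker build -t myorg/node-onbuild -f Dockerfile.onbuild .

    # A downstream, environment-specific Dockerfile then reduces to:
    #   FROM myorg/node-onbuild
    #   RUN npm install --only=dev    # dev image only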