Docker retain ENV variables

I am running a docker container using --env VAR="foo" to set a few variables. When I run commands on this running container from the same shell/environment from which I started the container, everything is fine.
The problem is that now I want to run commands against this container from cron. When cron runs the same command, the ENV variables in the container no longer exist.
How can I persist these ENV variables in the container regardless of where it is accessed from?
Edit:
To clarify, I docker run from a standard shell. In the cron job, I use docker exec and that is when the ENV vars disappear.
I have also noticed that on some host machines I can't run any execs on docker containers from a cron job.

I presume you use docker run inside your cron task.
If that's the case, that's normal: you are starting a new container from the same image.
If you want to use the same container (with all your env variables still set), use docker exec.
https://docs.docker.com/reference/commandline/exec/
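
For example (the container and image names below are assumptions, not from the question): a variable set with --env at run time becomes part of the container's configuration, so any later docker exec session sees it:

# Start the container once, with the variable in its configuration
docker run -d --name mycontainer --env VAR="foo" myimage

# Later, e.g. from cron: exec into the SAME container.
# The exec'd process inherits the container's environment.
docker exec mycontainer sh -c 'echo "$VAR"'    # prints: foo

If the exec itself fails from cron, the usual suspects are cron's minimal PATH (call the docker binary by absolute path, e.g. /usr/bin/docker) and the cron user not being in the docker group.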

Related

Run an AirFlow task in another Docker container

I am considering implementing AirFlow and have no prior experience with it.
I have a VM with docker installed, and two containers running on it:
a container with a python environment where cronjobs currently run
a container with an AirFlow installation
Is it possible to use AirFlow to run a task in the python container? I am not sure, because:
If I use the BashOperator with a command like docker exec mycontainer python main.py, I assume it will mark the task as success even if the python script fails (it successfully ran the command, but its responsibility ends there).
I see there is a DockerOperator, but it seems to take an image and create and run a new container, whereas I want to run a task on a container that is already running.
The closest answer I found is using kubernetes here, which is overkill for my needs.
The BashOperator runs the bash command on:
the scheduler container if you use the LocalExecutor
one of the executor containers if you use the CeleryExecutor
a new separate pod if you use the KubernetesExecutor
The DockerOperator, on the other hand, is designed to create a new docker container on a docker server (local or remote), not to manage an existing container.
To run a task (command) on an existing container (or any other host), you can set up an ssh server inside the python docker container, then use the SSHOperator to run your command on that remote ssh server (the python container in your case), as sketched below.
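
A rough sketch of the container-side setup, assuming a Debian-based image and a container named python-container that the Airflow container can reach by name (names, paths, and user are assumptions):

# Install and start an ssh server inside the existing python container
docker exec python-container apt-get update
docker exec python-container apt-get install -y openssh-server
docker exec python-container service ssh start

# Sanity check (assumes a user and key-based auth have been set up):
ssh someuser@python-container 'python /app/main.py'

On the Airflow side, an SSHOperator pointed at that host runs the command and marks the task failed on a non-zero exit status, which also addresses the exit-code concern from the question.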

Airflow run bash command on an existing docker container

I have Airflow running in a Docker container and I want to trigger a python script that resides in another container. I tried the regular bash operator, but that seems to run only locally. I also looked at the Docker operator, but that one seems to want to create a new container.
The airflow container must be able to access the python script to be executed. If the script is in another container, either mount a volume so that airflow can access it (as sketched below), or execute the DAG with the KubernetesPodOperator.
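
A minimal sketch of the volume approach (container, image, and path names are illustrative):

# Keep the script in one host directory mounted into both containers,
# so the airflow container sees it at a known path
docker run -d --name python-env -v "$PWD/scripts":/opt/scripts my_python_image
docker run -d --name airflow -v "$PWD/scripts":/opt/scripts my_airflow_image

# A BashOperator inside the airflow container can then run:
#   python /opt/scripts/main.py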

How can I run script automatically after Docker container startup without altering main process of container

I have a Docker container which runs a web service. After the container process is started, I need to run a single command. How can I do this automatically, either by using Docker Compose or Docker?
I'm looking for a solution that does not require me to substitute the original container process with a Bash script that runs sleep infinity etc. Is this even possible?
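
One common pattern (offered here only as a sketch, with every path assumed): a thin entrypoint wrapper that backgrounds the one-off command and then execs the original server. Unlike the sleep-infinity hack, exec replaces the wrapper, so the original server still ends up as the container's main process:

#!/bin/sh
# entrypoint.sh: run a one-off command after startup
# Background the one-off command, giving the service a moment to come up
( sleep 5 && /opt/app/post-start.sh ) &

# exec replaces this shell with the original server process,
# so the server (not the wrapper) is PID 1
exec /opt/app/server "$@"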

How to use export command to set environment variable with docker exec?

I have a running docker container created from an ancestor image my_base_image. Now that the container is running, can I set an environment variable using the export command with docker exec? If yes, how?
I tried the following, but it doesn't work:
docker exec -i -t $(docker ps -q --filter ancestor=my_base_image) bash -c "export my_env_var=hey"
Basically I want to set my_env_var=hey as an env variable inside the docker container. I know this can be done in many ways: using env_file or the environment key in docker-compose, or ENV in a Dockerfile. But I just want to know if it is possible with the docker exec command.
This is impossible. A process can never change the environment of any other process beyond itself, except that it can specify the initial environment of processes it starts itself. In this case, your docker exec shell isn’t launching the main container process, so it can’t change that process’s environment variables.
This is one of a number of changes for which you will need to stop, delete, and recreate the container. You should treat this as extremely routine container maintenance and plan to delete the container eventually. That means, for example, keeping any data that needs to be persisted outside the container, ideally in an external database but possibly in a mounted volume.
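
What you can do instead (the container name mycontainer is an assumption):

# Set a variable for a single exec'd process only; nothing persists
docker exec -e my_env_var=hey mycontainer sh -c 'echo "$my_env_var"'

# To make it part of the container's environment, recreate the container
docker rm -f mycontainer
docker run -d --name mycontainer -e my_env_var=hey my_base_image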

Reference env variable from host at runtime in "env-file" to be passed to docker image

Is there a syntax to reference an environment variable from the host in a Docker env-file?
Specifically, I'd like to do something like DOCKER_HOST=${HOSTNAME}, where HOSTNAME would come from the environment of the machine hosting the docker container.
The above gets no attempt at substitution whatsoever and is passed into the container literally as ${HOSTNAME}.
This is generally not done at the image level, but at runtime, on docker run:
See "How to get the hostname of the docker host from inside a docker container on that host without env vars"
docker run .. -e HOST_HOSTNAME=$(hostname) ..
That does use an environment variable.
You can do so without environment variables, using -h
docker run -h=$(hostname)
But that does not work when your docker run is part of a docker-compose setup. See issue 3840.
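
The substitution therefore has to happen in the host shell before docker reads anything; a minimal sketch (file names are illustrative):

# An env-file is read literally, so this line would be passed through verbatim:
#   DOCKER_HOST=${HOSTNAME}

# Option 1: let the shell expand the value on the docker run line
docker run --env-file env.list -e DOCKER_HOST="$(hostname)" myimage

# Option 2: generate the env-file with values already expanded
echo "DOCKER_HOST=$(hostname)" > env.generated
docker run --env-file env.generated myimage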
