I have a .env file for my docker-compose setup, and I was able to run it using "docker-compose up".
Now I have pushed the image to the cloud registry and want to run it on Cloud Run.
How can I supply the various environment variables?
I did create secrets in Secret Manager, but how can I integrate the two, so that my container starts reading all the secrets it needs?
Note: My docker-compose setup is an app with a database. I can split them into 2 containers if needed, but they still need secrets.
Edit: Added secret references.
EDIT:
I am unable to run my container
If the .env file has X=x and the docker-compose environment has app.prop=${X},
should I create a secret named X or x?
Is Cloud Run using the Dockerfile or docker-compose? The image I pushed was built from docker-compose only. Sorry, I am getting confused (I'm not assuming trivial things, as it is not working).
It is not possible to use docker-compose on Cloud Run, as it is designed for individual stateless containers. My suggestion is to create an image from your application service, upload the image to Google Container Registry in order to use it for your Cloud Run service, and connect it to Cloud SQL following the attached documentation. You can store database credentials with Secret Manager and pass them to your Cloud Run service as environment variables (check this documentation).
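If it helps, here is a rough sketch of what that deploy command could look like (service, project, region, secret, and Cloud SQL instance names are all hypothetical; it assumes the secrets already exist in Secret Manager and the Cloud Run service account has the Secret Manager Secret Accessor role):

gcloud run deploy my-app \
  --image gcr.io/my-project/my-app:latest \
  --region us-central1 \
  --set-secrets "DB_USER=db-user:latest,DB_PASSWORD=db-password:latest" \
  --add-cloudsql-instances my-project:us-central1:my-sql-instance

Each entry in --set-secrets exposes one Secret Manager secret to the container as an environment variable, which is how the app ends up reading them at startup.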
The newer docker compose (vs docker-compose) allows you to set secrets in the build section. This is nice because if you do secrets at runtime then the file is readable by anyone who can get into the container by reading /run/secrets/<my_secret>.
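For reference, a minimal sketch of that build-section syntax (service, secret, and variable names are just placeholders; assumes BuildKit is enabled):

# docker-compose.yml
services:
  app:
    build:
      context: .
      secrets:
        - my_secret
secrets:
  my_secret:
    environment: MY_SECRET    # or: file: ./my_secret.txt

# Dockerfile -- the secret is only mounted for this RUN step
RUN --mount=type=secret,id=my_secret cat /run/secrets/my_secret > /dev/null  # replace with whatever actually needs the secret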
Unfortunately, it appears that it's only possible to pass the secrets via either the environment or a file. Doing it via the environment doesn't seem like a great idea because someone on the box could read /proc/<pid>/environ while the image is being built to snag the secrets. Doing it via a file on disk isn't good because then the secret is being stored on disk unencrypted.
It seems like the best way to do this would be with something like
docker swarm init
read -sp "Enter your secret: " && printf '%s' "$REPLY" | docker secret create my_secret -
docker compose build --no-cache
docker swarm leave --force
Alas, it appears that Docker can't read build-time secrets from the swarm, for some unknown reason.
What is the best way to do this? This seems to be a slight oversight, along the lines of docker secret create not having a way to prompt for the value, forcing you to resort to hacks like the one above to keep the secret out of your bash history.
UPDATE: This is for Swarm/remote Docker systems, not targeted at local build-time secrets. (I realised you were asking about those primarily and just mentioned Swarm in the second part of the question. I believe it still holds good advice for some, so I'll leave the answer undeleted.)
Docker Swarm can only read runtime secrets that you create with the docker secret create command, and they must already exist on the cluster when you deploy the stack. We were in the same situation before. We solved the "issue" using Docker contexts. You can create an SSH-based Docker context which points to a manager (we just use the first one). Then on your LOCAL device (we use Windows as the base platform and a WSL2/Linux VM for the UNIX part), you can simply run docker commands with the inline --context property. More on contexts in the official docs. For instance: docker --context production secret create .... And so on.
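A minimal sketch of that workflow (context, host, secret, and stack names are hypothetical; assumes SSH access to a Swarm manager):

docker context create production --docker "host=ssh://deploy@swarm-manager-1"            # point a context at the manager over SSH
printf '%s' "$MY_SECRET_VALUE" | docker --context production secret create my_secret -   # create the secret on the remote cluster
docker --context production stack deploy -c docker-compose.yml mystack                   # deploy the stack that references my_secret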
I want to deploy an application using docker-compose inside an EC2 host.
For reasons beyond the scope of this question, one of the services will use a constant docker tag, as in myrepo/myimage:stable.
Periodically, the image will be updated (same tag, different hash) so I will need to run docker-compose pull && docker-compose up -d.
My question is whether there is a way of exposing docker-compose's API so that this can be invoked via an API call to the EC2 instance, so as to avoid having to SSH into the machine.
Compose doesn't have an API per se; it is just a local command-line tool. You need to use something like ssh, or a generic system-automation tool like Ansible or Salt Stack, to invoke it.
Amazon's hosted container-cluster systems do have network-accessible APIs. If you use EKS, you can use the Kubernetes API to update a Deployment spec's image:. Amazon's proprietary ECS system has a different API, but again you can use it to remotely update the image name without having direct access to the underlying node(s).
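For illustration, hedged sketches of what those remote update calls might look like (cluster, deployment, container, and service names are hypothetical):

# EKS: change the image on an existing Deployment through the Kubernetes API
kubectl set image deployment/myapp myapp=myrepo/myimage:20210414

# ECS: force the service to re-pull the tag and roll new tasks
aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment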
In all cases you will be better off if you can use a unique tag per build. In a Compose setup you could supply this via an environment variable
image: myrepo/myimage:${TAG:-stable}
and then deploy it with
ssh root@remote-host TAG=20210414 docker-compose up -d
Since each build would have a distinct tag/name, you don't need to explicitly docker-compose pull; Docker will know to pull an image that it doesn't already have locally.
In a Kubernetes/EKS context in particular, it's important that the image: value changes to force an update (or downgrade!); if you tell Kubernetes that you want to run a Pod with the stable tag, and it already has one, it won't change anything.
I have set up an instance of HashiCorp Vault. I have successfully written and read secrets to and from it. Getting Vault up and running is the easy part. Now, how do I use Vault as a store to replace the .env file in docker-compose.yml? How do I read secrets from Vault in all of my docker-compose files?
Even more difficult: how do I dynamically generate keys to access the secrets in Vault, then use those keys in my docker-compose.yml files, without editing those files each time I restart a stack? How is that process automated? In short, just exactly how can I leverage HashiCorp Vault to secure the secrets that are otherwise exposed in the .env files?
I have read all of their literature and blog posts, and haven't been able to find anything that outlines that process. I am stuck and any tips will be greatly appreciated.
Note: This is not a question about running a Hashicorp Vault container with docker-compose, I have successfully done that already.
Also Note: I cannot modify the containers themselves; I can only modify the docker-compose.yml file
You would need to query the Vault API to populate either your .env file or the environment in the entrypoint of your container. My preference would be the container entrypoint at worst, and ideally doing it directly in your application. The reason is that Vault secrets can be short-lived, and any container running for longer than that period would need to refresh its secrets.
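As a rough illustration of the .env approach, a hedged sketch of pulling a secret from the Vault HTTP API before starting the stack (the secret path and keys are hypothetical; assumes a KV v2 secrets engine, and that curl and jq are available):

# write every key/value pair under secret/myapp into the .env file docker-compose reads
curl -s -H "X-Vault-Token: $VAULT_TOKEN" "$VAULT_ADDR/v1/secret/data/myapp" \
  | jq -r '.data.data | to_entries[] | "\(.key)=\(.value)"' > .env
docker-compose up -d

Keep in mind this writes the secrets to disk in plain text, with all the caveats that come with that.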
If you go with the worst case of doing this in the entrypoint, there are a couple of tools that come to mind: confd from Kelsey Hightower, and gomplate.
confd can run as a daemon and restart your app inside the container when the configuration changes. My only concern is that it is an older and less maintained project.
gomplate would be run by your entrypoint to expand a template file with the needed values. That file could just be an env.sh that you then source into your environment if you need env vars. Or you can run it within your command line as a subshell, e.g.
your-app --arg "$(gomplate ...sometemplate...)"
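A hedged sketch of the env.sh variant of that entrypoint (file names are hypothetical; it assumes gomplate is available in the image, the template pulls its values from Vault via a gomplate datasource, and Vault credentials are provided to the container):

#!/bin/sh
set -e
gomplate -f /templates/env.sh.tmpl -o /tmp/env.sh   # render the template into an env file
. /tmp/env.sh                                       # load the rendered variables into the environment
exec "$@"                                           # hand off to the container's main command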
If you only use these tools to set the value once and then start your app, make sure to configure a healthcheck and/or have your app exit gracefully when the credentials expire. Then run your container with orchestration (Kubernetes/Swarm Mode) or set a restart policy so that it restarts after any credentials expire and picks up the new credentials.
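On the compose side, that restart behaviour could look roughly like this (service name and check command are hypothetical):

services:
  app:
    restart: on-failure        # restart the container after the app exits when its credentials expire
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3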
Do they use environment/config variables to link the persistent storage to the project-related Docker image?
So that every time a new VM is assigned, the Cloud Shell image can be run with those user-specific values?
I'm not sure I've caught all your questions and concerns. So, Cloud Shell comes in 2 parts:
The container that contains all the installed libraries, language support/SDKs, and binaries (Docker, for example). This container is stateless, and you can change it (in the settings section of Cloud Shell) if you want to deploy a custom container. For example, that's what the Cloud Run Button does to deploy a Cloud Run service automatically.
The volume dedicated to the current user that is mounted in the Cloud Shell container.
By the way, you can easily deduce that anything you store outside the /home/<user> directory is stateless and not persisted. The /tmp directory, Docker images (pulled or created), ... all of these are lost when Cloud Shell starts on another VM.
Only the volume dedicated to the user is stateful, and it is limited to 5 GB. It's a Linux environment, and you can customize the .profile and .bashrc files as you want. You can store keys in the ~/.ssh/ directory and use all the other tricks you can do on Linux in your /home directory.
Definitely missing something, and could use some quick assistance!
Simply, how do you deploy a Docker image to an Airflow DAG for running jobs? Does anyone have a simple example of deploying a Google container and running it via Airflow/Composer?
You can use the DockerOperator, which ships with Airflow (in Airflow 2.x it lives in the apache-airflow-providers-docker package).
If you are pulling an image from a private registry, you'll need to set up a connection with the relevant credentials and pass its ID via the docker_conn_id param.
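A minimal sketch of a DAG using it (assumes a recent Airflow 2.x with the Docker provider installed; the image, task id, and connection id are hypothetical):

from datetime import datetime

from airflow import DAG
from airflow.providers.docker.operators.docker import DockerOperator

with DAG(
    dag_id="run_container_job",
    start_date=datetime(2021, 1, 1),
    schedule=None,   # trigger manually
    catchup=False,
) as dag:
    run_job = DockerOperator(
        task_id="run_my_image",
        image="gcr.io/my-project/my-image:latest",  # hypothetical image in a private registry
        docker_conn_id="my_registry",               # Airflow connection holding the registry credentials
        command="python /app/job.py",               # whatever the container should run
    )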