I'm building a RoR app on Heroku that must run inside a Docker container. To do so I use the official Dockerfile. As is very common with Heroku, I need a few add-ons to make the app fully operational. In production the DATABASE_URL variable is available within my app, but when I try other add-ons that rely on environment variables (Mailtrap in my case), those variables aren't copied into the instance at runtime.
So my question is simple: how can I make Docker instances aware of the environment variables when executed on Heroku?
Before you ask: I already know that we can specify an environment directive right in docker-compose.yml. I would like to avoid that so that I can share this file through the project repository.
I didn't find any documentation about it, but it appears that Heroku very recently changed the way it handles config vars in Docker containers: they are now replicated automatically (values from docker-compose.yml are simply ignored).
The workaround, to avoid committing sensitive config files, is to create a docker-compose.yml.example with empty fields and commit it, then add docker-compose.yml to .gitignore.
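A committed skeleton might look like this (the service and variable names are illustrative):

    # docker-compose.yml.example: copy to docker-compose.yml and fill in the values
    web:
      build: .
      environment:
        MAILTRAP_API_TOKEN: ""   # fill in locally; the real file stays gitignored
        SECRET_KEY_BASE: ""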
Since that's not very practical on Heroku, you can use Docker's --env switch to add any variable to the container's environment.
Like this: docker run --env "MY_VAR=yolo" my_image:my_tag
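With many variables, the same can be done from a file using the --env-file switch (the filename is illustrative):

    docker run --env-file ./config.env my_image:my_tag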
You could also serve a private docker-config.yml from a secure site that Heroku would have access to (that would be my preferred solution in your case).
I've got an ASP.NET Core 2.2 web app that I have enabled Docker support for. I have created a test app for review here.
I am running it in VS with Docker locally. I want to add environment variables/secrets to the app settings in order to override the values in the appsettings.json file. To do this locally, I have tried changing values in:
launchSettings.json
Dockerfile
However, for both of these, when I attach to my Docker instance and printenv the variables, I find that ASPNETCORE_ENVIRONMENT still shows up as Development.
I am attaching to the running container like this:
docker exec -t -i 4c05 /bin/bash
I have searched all files in my solution. I can't find ASPNETCORE_ENVIRONMENT being set to Development anywhere in the solution. However, somehow, the environment variable is still being set with that value.
What could be going wrong? I want that variable to change. Once this works, what I really want to do is add a connection string secret to the environment variables, so that it can be used via the appsettings.json file locally or via a Docker secret environment variable when the ASP.NET Core web app runs in a container. I think I've got this code working; it's just that the variables are not being deployed to the running container as expected.
Thanks.
Mmm, it seems there is a problem with Dockerfile support in VS. However, when I use the orchestration support, via docker-compose, the functionality works as expected, so I'm answering the question myself :-)
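For reference, the compose orchestration that VS adds generates a docker-compose.override.yml where ASPNETCORE_ENVIRONMENT is set (to Development by default), and overriding it there is the mechanism at work; a minimal sketch (the service name is illustrative):

    # docker-compose.override.yml
    version: '3.4'
    services:
      testapp:
        environment:
          - ASPNETCORE_ENVIRONMENT=Staging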
I have a Python API that has to know its public address in order to create proper links to itself in its responses (needed for paging and other HATEOAS stuff). The address is given to the application as an environment variable.
In production it's handled by Terraform, but I also have extensive local tests that use Docker Compose. In the paging tests I need to be aware that I'm running locally, and I need to replace the placeholder address I put in the app's env with http://localhost:<apps_bound_port> in order to follow the links.
I don't want to do that. I'd like a way to put the port assigned by Docker into the app's environment variables. The problem wouldn't exist if I were using fixed ports (then I could just put something like http://localhost:8000 in the public address variable), but I can have multiple instances of Compose running at once, so fixed ports won't work.
I know I can pass environment variables from the shell running docker-compose to the containers, but I don't know of a way to insert the generated port using this approach.
The only solution I have for now is to find a free port before Compose runs and then pass it as an environment variable (API_PORT=<FREE_PORT> docker-compose up), while setting up the port mapping like this in docker-compose.yml:
ports:
  - "${API_PORT}:8000"
This isn't ideal, because I run Compose both from the shell (with make) and from Python tests, so I'd need to duplicate the logic that finds a free port and puts it in an env variable in both places.
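As a sketch of that workaround on the Python side (the names are illustrative, it assumes the app listens on 8000 inside the container, and note the small race window between finding the port and Compose binding it):

    import os
    import socket
    import subprocess

    def find_free_port() -> int:
        # Bind to port 0 so the OS assigns a free ephemeral port
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind(("", 0))
            return s.getsockname()[1]

    port = find_free_port()
    # Matches the "${API_PORT}:8000" mapping in docker-compose.yml
    env = {**os.environ, "API_PORT": str(port)}
    subprocess.run(["docker-compose", "up", "-d"], env=env, check=True)
    public_address = f"http://localhost:{port}"  # what the tests follow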
Is there something I'm missing, or should I create a feature request for Docker Compose?
I currently have a locally tested and working web app that consists of 4 Docker containers: Java MVC, NodeJS, Flask, and MongoDB. I have 4 Dockerfiles, one for each, and I manage the builds with docker-compose.yml.
However, now I want to push my code to Heroku, so I read the documentation at https://devcenter.heroku.com/articles/container-registry-and-runtime. It seems very ambiguous about how to use docker-compose in production. This is what the docs say:
"If you’ve created a multi-container application you can use Docker Compose to define your local development environment. Learn how to use Docker Compose for local development."
Can anyone guide me to some actual code of how I can push my project to the Heroku Container using Heroku's CLI?
Just an update on this question since it seems to be getting a lot of traction lately.
There is now an officially supported heroku.yml solution offered by Heroku.
You can now write a heroku.yml manifest (with a format similar to docker-compose) and Heroku will build your images from it. Just follow the link above for details.
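A minimal heroku.yml might look like this (the process-type names and Dockerfile paths are illustrative):

    build:
      docker:
        web: Dockerfile.node
        api: Dockerfile.flask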
Happy Heroku-ing.
The more accurate Heroku documentation for what you are looking to do is here:
https://devcenter.heroku.com/articles/container-registry-and-runtime
The above will walk you through setting up the Heroku container plugin and logging in to the registry. You can even base a Dockerfile on an existing image by starting it with a line like:
FROM <insert image tag here>
To set this up easily, name your Dockerfiles with different suffixes, such as Dockerfile.mongo, Dockerfile.node, Dockerfile.flask, and Dockerfile.javamvc. The suffix tells Heroku the name of the dyno (process type) each image will run as. You can then push all of your containers with the following command, which recursively builds every Dockerfile as long as each one has a unique suffix:
heroku container:push --recursive
As Heroku doesn't read docker-compose files, any environment variable setup, port exposure, etc. will need to be migrated into the Dockerfiles. Also, as I can't find a way to do persistent storage/volume mounting with containers on Heroku, I would recommend using a Heroku add-on for your Mongo database.
On Heroku, you will see your app running as one dyno per Dockerfile, with each dyno named after the suffix of its Dockerfile.
UPDATE:
Travis brings up a good point: make sure to have a CMD statement in your Dockerfile, otherwise Heroku will throw an error.
Heroku also recently added a step to the process: you will need to run heroku container:release <your dyno name> for each dyno that you want to update.
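Putting it together, a typical deploy sequence then looks like this (the process-type names mirror the Dockerfile suffixes above and are illustrative):

    heroku container:login
    heroku container:push --recursive
    heroku container:release javamvc node flask mongo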
Yet another update on this question: while looking into it I found out that Heroku now officially supports docker-compose for local development.
Please follow this guide: Local Development with Docker Compose
Worth noting that, as Heroku's filesystem is non-persistent, the guide above recommends using official Docker images (redis, postgres, etc.) for local development, but using Heroku's own offerings when deploying to it.
I plan to use Docker to build my dev and production environments. I'm building a Django-based app.
In dev I use docker-compose to manage all the local containers. It's a nice and convenient solution. I run Django, 3 Celery queues, RabbitMQ, and 2 PostgreSQL DBs.
But my production environment is quite different. I need to run gunicorn and nginx, and the DBs will run on AWS RDS. The Django app will of course require more, like a different settings file and more env vars.
I'm wondering how to divide this up. Should I use docker-compose there as well? That would require separate files for dev and prod, and maybe more in the future for staging etc. If so, how do I deploy it? Using Jenkins to pull and restart everything via compose?
Or maybe I should use Ansible to run docker commands directly? But then I'd have no confidence that my dev environment is the same as live, and its behaviour would be harder to predict.
I like the idea of running compose files in all environments, but I'm not sure that maintaining multiple files for different environments is a good idea. Dev requires fewer env vars and less configuration. I could use an env file to set all of them in production, but should I keep my live settings in the same repo? Previously I set all env vars while provisioning, and that was a separate process. Now it looks like provisioning and deploy are the same; maybe that is the way with Docker?
Using http://docs.docker.com/compose/extends/#multiple-compose-files you can keep all the common stuff in docker-compose.yml and use docker-compose.prod.yml to add extra services and change links, environment, and ports.
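A minimal sketch of that layout (the service and settings-module names are illustrative):

    # docker-compose.yml: shared base
    web:
      build: .
      environment:
        - DJANGO_SETTINGS_MODULE=myproject.settings.base

    # docker-compose.prod.yml: production overrides
    web:
      environment:
        - DJANGO_SETTINGS_MODULE=myproject.settings.production
      ports:
        - "80:8000"

You would then run production with: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d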
I've built a simple Docker Compose project as a development environment. I have PHP-FPM, Nginx, MongoDB, and Code containers.
Now I want to automate the process and deploy to production.
The docker-compose.yml can be extended and can define multiple environments. See https://docs.docker.com/compose/extends/ for more information.
However, there are also the Dockerfiles for my containers, and the dev environment needs more packages than production does.
The main question is: should I use separate Dockerfiles for dev and prod, and manage them via docker-compose.yml and production.yml?
Separate Dockerfiles are the easy approach, but they duplicate code.
The other solution is to use environment variables and handle them from a bash script (maybe as an entrypoint?).
I am searching for other ideas.
According to the official docs:
... you’ll probably want to define a separate Compose file, say production.yml, which specifies production-appropriate configuration.

Note: The extends keyword is useful for maintaining multiple Compose files which re-use common services without having to manually copy and paste.
In docker-compose version >= 1.5.0 you can use environment variable substitution; maybe this suits you?
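For example (the image name is illustrative):

    # docker-compose.yml: ${TAG} is substituted from the shell environment
    web:
      image: myapp:${TAG}

Run it with something like: TAG=prod docker-compose up -d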
If the packages needed for development aren't too heavy (i.e. the image size isn't significantly bigger), you could create Dockerfiles that include all the components and then decide whether to activate them based on the value of an environment variable in the entrypoint.
That way you could have the main docker-compose.yml provide the production environment, while development.yml would just add the correct environment variable value where needed.
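A minimal sketch of such an entrypoint (the variable name and the dev extras are illustrative):

    #!/bin/sh
    # entrypoint.sh: enable dev-only components when DEV_MODE=1
    if [ "$DEV_MODE" = "1" ]; then
        echo "starting development extras"
        # e.g. enable xdebug or a file watcher here (illustrative)
    fi
    exec "$@"  # hand off to the container's CMD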
In this situation it might be worth considering using an "onbuild" image to handle the commonalities among environments, then using separate images to handle the specifics. Some official images have onbuild versions, e.g., Node. Or you can create your own.
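A rough sketch of that approach, using Node as in the official onbuild images (the image names are illustrative):

    # Dockerfile.base: shared ONBUILD image, built once as myapp-base
    FROM node:8
    WORKDIR /app
    ONBUILD COPY package.json .
    ONBUILD RUN npm install
    ONBUILD COPY . .

    # Dockerfile: production image, inherits the ONBUILD steps above
    FROM myapp-base
    CMD ["node", "server.js"]

    # Dockerfile.dev: same base plus dev-only extras
    FROM myapp-base
    RUN npm install --only=dev
    CMD ["npm", "run", "dev"]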