Configuration of dockerized applications

How do you deal with host-specific configuration for Docker containers?
Most importantly, production passwords. They can't be baked into the container image for security reasons. They also need to be changed on a regular basis without redeploying anything.

I mostly use volume containers if I really want to avoid making an image for that: http://docs.docker.com/userguide/dockervolumes/
However, I have done things like
FROM myapp
# add the host-specific secrets and lock down their permissions
ADD password.file /etc/password.file
RUN chmod 600 /etc/password.file
to make specific images with the password baked in.
That way the specific configuration, such as passwords, is not in the general images, which are re-used and known, but in an image that is built for, and on, the host that runs it. (This approach is not really very different from using Puppet/Chef etc. to customise the files on a VM/server: the configuration eventually needs to hit the filesystem in a safe but repeatable way.)
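To make that concrete, such an image would be built on (or for) each host; a minimal sketch, assuming the Dockerfile above sits next to that host's password.file (the image name is illustrative):
docker build -t myapp-with-password .
docker run -d myapp-with-password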

As one comment says, use environment variables, which you can pass via -e pass=abcdef. You can also save them in a fig.yml config file to avoid having to type them every time.
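A minimal sketch of such a fig.yml (the service and image names are illustrative; fig was the predecessor of Docker Compose, and the same keys work in a docker-compose.yml):
web:
  image: myapp
  environment:
    - pass=abcdef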

Related

Make use of .env variables in DDEV's config.yaml?

I'd like to be able to define the variable values in DDEV's version-controlled .ddev/config.yaml file, using the non-version-controlled .env file. For example (pseudo-code):
name: env($PROJECT_NAME)
# etc...
I can't rely on remembering to swap out config.yaml files or any other manual steps.
The reason for this is that I need to have multiple DDEV instances of the same site. Each instance would be committing to the same repo, but may (or may not) be on different branches. In other words, they need to be capable of being merged with each other without DDEV getting mixed up. Since I have .ddev/config.yaml committed to the repo, I need some other way of keeping the DDEV instances separate.
You probably want to use config.*.yaml for this. By default, files matching config.*.yaml are gitignored. For example, config.local.yaml might hold local overrides. See the docs for more info.
I haven't experimented with using the .env file in this context, but I know that config.local.yaml will work fine for this use.
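A minimal sketch of such an override file (the values are illustrative, not taken from the question):
# .ddev/config.local.yaml - gitignored by default, so each checkout can differ
name: mysite-instance2
php_version: "8.1"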

Do I have to rebuild my Docker image every time I want to make a change to app settings?

We distribute Docker images of our .NET Core Web API to clients.
By setting the ASPNETCORE_ENVIRONMENT environment variable to X in the client's Kubernetes Helm Chart, the correct environment settings in appsettings.X.json get picked up. This all works nicely.
But what happens if the client needs to change one of the settings in appsettings.X.json? We don't want them to rebuild the Docker image.
Can someone offer a better architecture here?
The most common practice is to get settings directly from the environment. Thus, instead of an appsettings.json, you would read from the environment (you could have defaults too). Another solution would be to use confd: http://www.confd.io/
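For ASP.NET Core specifically, the default configuration builder already layers environment variables on top of appsettings.X.json, so a client can override individual settings at deploy time without rebuilding the image. A hedged sketch (the image name and setting keys are illustrative; double underscores map onto nested configuration sections):
docker run -e ASPNETCORE_ENVIRONMENT=Staging \
           -e Logging__LogLevel__Default=Warning \
           -e ConnectionStrings__Default="Server=db;Database=app;User Id=app;Password=secret" \
           mycompany/mywebapi:latest
In a Kubernetes Helm chart the same values would go under the container's env: section.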

How do I serve static files from Docker in an environment-agnostic way?

I've been trying to set up a Docker environment where the frontend (Angular/TypeScript in our case) is built in the Dockerfile and then served through Nginx.
We want to keep the Docker images environment-agnostic (so we can change our environment without having to rebuild the images). This means we can't build the code with environment variables. So far the options we see are:
inject the values when serving the code (probably some performance impact; we don't know a neat way of doing this; see the sketch after this question)
still build separate images (and drop our requirement)
have all environments compiled into the code with environment.<env>.ts files (not really agnostic images, just a choice between a static set of environments)
Is there any recommended method of doing this? Am I missing something? I feel like this should be a solved problem, but I can't find anything on how to do this "properly".
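One common way to realize the first option is to have the container render a tiny runtime config file from its environment at startup and have the Angular app load it before bootstrapping, so the image itself stays environment-agnostic. A hedged sketch of such an nginx entrypoint script (the paths and variable names are assumptions, not from the question):
#!/bin/sh
# render the runtime settings the SPA fetches before bootstrapping
cat > /usr/share/nginx/html/assets/env.js <<EOF
window.__env = { apiUrl: "${API_URL}" };
EOF
exec nginx -g 'daemon off;'
The performance impact is negligible because the file is generated once at container start, not per request.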

Should I have just the needed environment, or the environment plus the application itself, in a Docker image?

I've been playing with Docker for the last 5 days, and now that I'm comfortable with it, I wonder: should I put my application, e.g. a war file, into the image at build time, in other words ADD it inside the Dockerfile, or should I just generate an image that provides the necessary environment for my application? Is there a best practice? The second option seems to make more sense if, for example, you want to automate the build and deployment in Jenkins. But I would still like to know.
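For reference, the first option is only a couple of Dockerfile lines; a minimal sketch, assuming the official Tomcat base image and a war produced by the build (all names are illustrative):
FROM tomcat:9
# bake the application into the image at build time
COPY target/myapp.war /usr/local/tomcat/webapps/ROOT.war
With the second option you would keep an environment-only image and deliver the war at deploy time instead, for example by mounting it as a volume.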

Developing several projects locally: how to configure environment variables

Let's say I am developing 2 applications for 2 different clients, which use 2 different database configurations.
When using OpenShift and CakePHP it is considered good practice not to store the real connection info in the configs, but to use environment variables instead.
That way the Git repo is also always kept clean of server-specific stuff.
That is all fine as long as I have ONE project, but as soon as there is another one, I need to override my local env vars according to the current project.
Is there any way to avoid this? Is it possible to set up env vars on my local machine per directory or something like that?
I am running OS X with MAMP Pro.
This may not be the best solution, but it would work: create a different user on your local machine and switch to that user when you need to access those other environment variables.
I create a 'data' directory in my Git repo and set it to be ignored. This way nothing in there will be saved in the repo or sent to OpenShift. In it I place a config.ini file with all the info that I don't want in my repo.
I then manually put that config.ini file in OpenShift's persistent DATA directory using WinSCP. You only have to do this when you change your config.ini.
When my app runs it detects whether it's local or on OpenShift and loads the config.ini file from the correct directory.
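A hedged sketch of that detection in shell terms (OpenShift sets OPENSHIFT_DATA_DIR on its gears; the local path is an assumption):
# choose the config.ini location depending on where we are running
if [ -n "$OPENSHIFT_DATA_DIR" ]; then
  CONFIG_PATH="$OPENSHIFT_DATA_DIR/config.ini"
else
  CONFIG_PATH="./data/config.ini"
fi
The CakePHP app does the equivalent check before loading its database configuration.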
I would be interested if somebody has a better idea.
HTH
