Establishing configurations in new containers - docker

I am setting up a brand new development environment, nginx+php-fpm and decided to create application containers (using docker) for each service.
Normally I would install nginx and php, modify the configuration (with an editor like vim), and reload the services until they were correctly configured.
To establish a similar procedure, I start the initial container and copy /etc/nginx to the host, modify the config files on the host, and use a Dockerfile (containing another COPY) to test the changes.
Given that the containers are somewhat ephemeral and aren't really meant to contain utilities like vim, I was wondering how people set up the initial configuration?
Once I have a working config I know the options with regard to configuration management for managing the files. It's really the establishment of new containers that I'm curious about.

You can pass configuration through environment variables or mount a host file as a data volume.
Example for nginx configuration:
docker run --name some-nginx -v /yoursystem/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
You can use many images from Docker Hub as a starting point, for example nginx-php-fpm.
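Once the mounted config works, a Dockerfile can bake it into a custom image. A minimal sketch, assuming the edited nginx.conf sits next to the Dockerfile on the host:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
Build and run it with docker build -t my-nginx . and docker run -d my-nginx (the my-nginx tag is just an example name).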

Related

Docker setup but apache on local machine

I have a docker setup (php7, mysql and apache). This is working correctly.
However, I have to transfer the project on the server where there is already an apache running.
I was wondering how I could use the apache on the server to connect to my docker setup.
You can use either docker-compose or a Dockerfile, or a combination of the two.
You can read more about them in Docker Compose Docs and Dockerfile Docs.
In short:
Create a docker-compose.yml file, together with a Dockerfile, with the contents you need as per the docs above.
Connect the services together in your code: instead of localhost for the database host, use the service name you specify in the docker-compose.yml file.
Also copy or add the relevant files: for apache, /etc/apache2/httpd.conf and /etc/apache2/sites-available/*.conf (all files ending with .conf); for mysql, the /var/lib/mysql/ directory (the database files); and of course your project files.
Then run the docker-compose up -d command.
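As a rough illustration only (service names, image tags and paths are assumptions, not taken from the question), a docker-compose.yml for a php7/apache + mysql setup could look like:
version: "3"
services:
  web:
    build: .            # Dockerfile that installs php7 + apache and copies the project files
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./mysql-data:/var/lib/mysql
Inside the application code, the database host would then be db (the service name) instead of localhost.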

How do you configure a docker container during development time such that it can be deployed to kubernetes later

I'm configuring a docker container for development purposes with the intent to re-configure it (minimally) for k8s cluster deployment. Immediately I run into the issue of user permissions with volume mounts to my local source directory.
For deployment to the cluster I will bake my source directory into the image, which is really the only change I would want to make for deployment.
I've read many SO articles suggesting running as your local user/group id (1000/1000 in my case).
In docker, writing file to mounted file-system as non-root?
Docker creates files as root in mounted volume
Let non-root user write to linux host in Docker
Understanding user file ownership in docker: how to avoid changing permissions of linked volumes
Is it possible/sane to develop within a container Docker
But all of those questions seem to gloss over a seemingly critical detail: when you use --user to alter your user ID within the docker container you lose root, and along with it a lot of functionality; for example, whoami doesn't work. It seems to become very cumbersome to test configuration changes in the docker environment, which is common during development.
The options for developing directly into the docker container seem very limited:
Add user/group 1000/1000 to the docker image (a sketch of this appears after the list), which seems to violate the run-anywhere mantra of docker/kubernetes.
chown all your files constantly during development and use root in the container.
Are there other options for this list that are more palatable for developing directly into a docker container?
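For reference, a hedged sketch of the first option using build arguments, so the UID/GID are not hard-coded into the image (the base image, user name and defaults are assumptions):
FROM ubuntu:20.04
ARG UID=1000
ARG GID=1000
RUN groupadd -g ${GID} dev && useradd -m -u ${UID} -g ${GID} dev
USER dev
Built with docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) ., the container user then matches the host user that owns the mounted source directory.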

Docker container that depends on file in other container

I have a monolithic application that I am trying to containerize. The folder structure is like this:
--app
|
|-file.py <- has a variable foo that is passed in
--configs
|
|-variables.py <- contains foo variable
Right now, I have the app in a container and the configs in a container. When I try to start up the app container, it fails because of a dependency on the config container's variable.
What am I doing wrong, and how should I approach this issue? Should the app and config be in one big container for now?
I was thinking docker-compose could solve this issue. Thoughts?
The variables.py file could be in a volume accessed by the app container, imported from the config container with the --volumes-from config option to docker run. With Docker Compose you would use the volumes_from directive.
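A minimal sketch of that approach (both image names are hypothetical, and it assumes the config image places variables.py under /configs):
$ docker create -v /configs --name config my-config-image
$ docker run -d --volumes-from config --name app my-app-image
The first command creates (without starting) a container exposing /configs as a volume; the second starts the app container with that volume mounted at the same path, so the app can import variables.py from /configs.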
A less recommended way:
Run the config container first.
Then mount the Docker socket into the app container via -v "/var/run/docker.sock:/var/run/docker.sock", which will let the app container access the config container, though I think this will need privileged access.
This is similar to the Docker-in-Docker concept.
You can also consider a design change to your application: serve the foo variable over HTTP, which results in a much simpler solution. A simple web server on the config side and the urllib3 module in Python on the app side will let you serve the variable over internal Docker networking.

How can I provide application config to my .NET Core Web API services running in docker containers?

I am using Docker to deploy my ASP.NET Core Web API microservices, and am looking at the options for injecting configuration into each container. The standard way of using an appsettings.json file in the application root directory is not ideal, because as far as I can see, that means building the file into my docker images, which would then limit which environment the image could run in.
I want to build an image once which can then be provided configuration at runtime and rolled through dev, test, UAT and into Production without creating an image for each environment.
Options seem to be:
Providing config via environment variables. Seems a bit tedious.
Somehow mapping a path in the container to a standard location on the host server where appsettings.json sits, and getting the service to pick this up (how?)
May be possible to provide values on the docker run command line?
Does anyone have experience with this? Could you provide code samples/directions, particularly on option 2) which seems the best at the moment?
It's possible to create data volumes in the docker image/container, and also to mount a host directory into a container. The host directory will then be accessible inside the container.
Adding a data volume
You can add a data volume to a container using the -v flag with the docker create and docker run commands.
$ docker run -d -P --name web -v /webapp training/webapp python app.py
This will create a new volume inside a container at /webapp.
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp.
Refer to the Docker Data Volumes documentation.
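Applied to option 2 from the question, a hedged sketch (the image name, host path, and the assumption that the app runs from /app inside the container are all illustrative) is to keep an environment-specific settings file on each host and mount it over the one in the image:
$ docker run -d -p 8080:80 -v /etc/myapi/appsettings.json:/app/appsettings.json:ro myapi
The same image can then be promoted through dev, test, UAT and Production, with only the mounted file differing per host.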
We are using another packaging system for now (not docker itself), but still have the same issue: a package can be deployed in any environment.
So, the way we are doing it now:
Use an external configuration management system to hold and manage configuration per environment
Inject into our package the basic environment variables that hold the configuration management system connection details
This way we not only allow the package to run in almost any "known" environment, but also get run-time configuration management.
When you are running docker, you can use the environment variable options of the run command:
$ docker run -e "deep=purple" ...
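For an ASP.NET Core service this pairs well with the configuration system's environment variable provider, which maps a double underscore in a variable name to the : key separator. A sketch (image and key names are assumptions) that overrides ConnectionStrings:Default from appsettings.json:
$ docker run -d -e ConnectionStrings__Default="Server=db;Database=app;User=sa" myapi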

Packaging an app in docker that can be configured at run time

I have packaged a web app I've been working on as a docker image.
I want to be able to start the image with some configuration, like the URL of the couchdb server to use, etc.
What is the best way of supplying configuration? My app relies on env variables; can I set these at run time?
In addition to setting environment variables during docker run (using -e/--env and --env-file) as you already discovered, there are other options available:
Using --link to link your container to (for instance) your couchdb server (see the sketch after this list). This will work if your server is also a container (or if you use an ambassador container to another server). Linking containers makes some environment variables available, including the server IP and port, that your script can use. This will work if you only need to set references to services.
Using volumes. Volumes defined in the Dockerfile can be mapped to host folders, so you can use them to access configuration files, for instance. This is useful for very complex configurations.
Extending the image. You can create a new image based on your original and ADD custom configuration files or ENV entries. This is the least flexible option but is useful for complex configurations to simplify launching, especially when the configuration is mostly static (probably a bad idea for services/hostnames, but it can work for frameworks that can be configured differently for dev/production). It can be combined with any of the above.
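A brief sketch of the linking option (the app image name mywebapp is an assumption):
$ docker run -d --name couchdb couchdb
$ docker run -d --link couchdb:couchdb mywebapp
The link injects environment variables such as COUCHDB_PORT_5984_TCP_ADDR and COUCHDB_PORT_5984_TCP_PORT into the app container, and the alias also resolves as the hostname couchdb, so the app can build its CouchDB URL from either.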
It seems docker supports setting env variables - should have read the manual!
docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
http://docs.docker.com/reference/commandline/cli/#run
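For completeness, ./env.list in that command is just a plain file with one VARIABLE=value per line; a hypothetical example (the variable names are illustrative):
MYVAR3=bar
COUCHDB_URL=http://couchdb:5984
The bare -e MYVAR1 form passes MYVAR1 through from the host environment without restating its value.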
