Making a karaf/docker service read a configuration file

Starting from the io8 maven archetype, I'm setting up a Docker container containing a Karaf container containing a CXF restful web service.
I want this to read a file when it starts up that parameterizes it. What's the procedure for (a) setting up a Docker container so that it can receive a config file when launched, and (b) finding that file from inside?

To provide a configuration file to a container you can use something like the following:
docker run -d -v /path/to/your/config.file:/path/inside/the/container/config.file yourimage
For more information, please refer to the official documentation on how to use volumes.
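For the Karaf part of the question, a minimal sketch (the Karaf home path and image name below are placeholders, adjust them to your actual image layout): mount the file somewhere the service can read it from a fixed path, for example Karaf's etc/ directory.

docker run -d \
  -v /host/config/myservice.cfg:/opt/apache-karaf/etc/myservice.cfg \
  my-karaf-image

Inside the container the service then reads /opt/apache-karaf/etc/myservice.cfg like any local file; if the file is a .cfg dropped into Karaf's etc/ directory, Karaf's ConfigAdmin will usually pick it up as well, using the file name as the PID.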

Related

Starting a Docker file with Docker.DotNet

Is there any way to start a Docker container defined by a Dockerfile using Docker.DotNet? I found the BuildImageFromDockerfileAsync method, but that seems to be expecting a stream of a tar file. I just want to start a Dockerfile, or even better, a Docker Compose file.
Docker.DotNet is only a .NET wrapper for the Docker REST API. It does not allow you to use the Docker CLI tools programmatically.

Docker setup but apache on local machine

I have a docker setup (php7, mysql and apache). This is working correctly.
However, I have to transfer the project on the server where there is already an apache running.
I was wondering how I could use the apache on the server to connect to my docker setup.
You can use either docker-compose or a Dockerfile, or a combination of the two.
You can read more about them in the Docker Compose Docs and Dockerfile Docs.
In short:
Create a docker-compose.yml file, with the contents you need per the docs above, together with a Dockerfile.
Connect the services together in your code: for example, instead of localhost as the database host, use the service name you specify in the docker-compose.yml file.
Also copy or add the relevant Apache files, such as /etc/apache2/httpd.conf and /etc/apache2/sites-available/*.conf (all files ending with .conf), the MySQL data directory /var/lib/mysql/ (database files), and of course your project files.
Then run docker-compose up -d (see the sketch below).
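A minimal docker-compose.yml sketch along those lines (service names, images and paths are illustrative assumptions, not a drop-in config):

version: "3"
services:
  web:
    build: .                        # Dockerfile installing php7 + apache
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html         # project files
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - ./mysql-data:/var/lib/mysql # database files

With this layout the application connects to the database using the host name db (the service name) instead of localhost.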

config file changes to VerneMQ docker image

I want to make some changes to the config file of the VerneMQ image running on docker. Is there any way to reach the config file so that changes could be made?
If you exec into the container with docker exec -it <containerID> bash, you'll see that the vernemq.conf file is located under /etc/vernemq/. It's just a matter of replacing this default conf with your own config file. Keep your vernemq.conf in the same directory as the Dockerfile and then add the following line to the Dockerfile:
COPY vernemq.conf /etc/vernemq/vernemq.conf
This line copies your config file into the container at the given location, replacing the existing one. Finally, build the image. For more advanced usage, check this out!
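As a sketch, the whole Dockerfile can be as small as this (it assumes the erlio/docker-vernemq base image mentioned below; adjust to whichever VerneMQ image you actually use):

FROM erlio/docker-vernemq
COPY vernemq.conf /etc/vernemq/vernemq.conf

Then build and run it:

docker build -t my-vernemq .
docker run -d --name vernemq my-vernemq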
Another approach could be to simply set your options as environment variables for the docker image.
From the official docker hub page:
VerneMQ Configuration
All configuration parameters that are available in vernemq.conf can be defined using the DOCKER_VERNEMQ prefix followed by the configuration parameter name. E.g.: allow_anonymous=on is -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on", and allow_register_during_netsplit=on is -e "DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT=on". All available configuration parameters can be found at https://vernemq.com/docs/configuration/.
This is especially useful for compose-like yml-based deployments.
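For example, a docker-compose snippet using this convention could look like the following (the service name and image tag are assumptions):

version: "3"
services:
  vernemq:
    image: erlio/docker-vernemq
    environment:
      - DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on
      - DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT=on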
You can also create a new Dockerfile to modify the image contents:
FROM erlio/docker-vernemq
RUN Modify Command
Then use the new Dockerfile to build a new image and run a container from it.

How can I provide application config to my .NET Core Web API services running in docker containers?

I am using Docker to deploy my ASP.NET Core Web API microservices, and am looking at the options for injecting configuration into each container. The standard way of using an appsettings.json file in the application root directory is not ideal, because as far as I can see, that means building the file into my docker images, which would then limit which environment the image could run in.
I want to build an image once which can then be provided configuration at runtime and rolled through dev, test, UAT and into production without creating an image for each environment.
Options seem to be:
Providing config via environment variables. Seems a bit tedious.
Somehow mapping a path in the container to a standard location on the host server where appsettings.json sits, and getting the service to pick this up (how?)
May be possible to provide values on the docker run command line?
Does anyone have experience with this? Could you provide code samples/directions, particularly on option 2) which seems the best at the moment?
It's possible to create data volumes in the Docker image/container, and also to mount a host directory into a container. The host directory will then be accessible inside the container.
Adding a data volume
You can add a data volume to a container using the -v flag with the docker create and docker run commands.
$ docker run -d -P --name web -v /webapp training/webapp python app.py
This will create a new volume inside a container at /webapp.
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp.
Refer to the Docker Data Volumes documentation.
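Applied to option 2, a minimal sketch (the image name, port mapping and host path are assumptions): mount an environment-specific appsettings.json from the host over the one in the application directory.

docker run -d -p 5000:80 \
  -v /etc/myapi/appsettings.json:/app/appsettings.json:ro \
  myapi:latest

The default ASP.NET Core host builder already loads appsettings.json from the content root, so the service picks up whatever file is mounted there at runtime (assuming the app's working directory in the image is /app).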
We are using a different packaging system for now (not Docker itself), but we still have the same issue: the package can be deployed in any environment.
So, the way we are doing it now:
Use an external configuration management system to hold and manage the configuration per environment.
Inject into the package the basic environment variables holding the configuration management system's connection details.
This way we not only allow the package to run in almost any "known" environment, but also get run-time configuration management.
When you are running Docker, you can use the environment variable option (-e) of the run command:
$ docker run -e "deep=purple" ...
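In an ASP.NET Core service such variables are picked up by the environment variable configuration provider, where a double underscore acts as the section separator. A sketch with illustrative variable names and image:

docker run -d \
  -e "Logging__LogLevel__Default=Warning" \
  -e "ConnectionStrings__Default=Server=db;Database=app;User=app;Password=secret" \
  myapi:latest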

Establishing configurations in new containers

I am setting up a brand new development environment, nginx+php-fpm and decided to create application containers (using docker) for each service.
Normally I would install nginx and php, modify the configuration (with an editor like vim), and reload the services until they were correctly configured.
To establish a similar procedure, I start the initial container, copy /etc/nginx to the host, modify the config files on the host, and use a Dockerfile (containing another COPY) to test the changes.
Given that the containers are somewhat ephemeral and aren't really meant to contain utilities like vim, I was wondering how people set up the initial configuration?
Once I have a working config I know the options with regards to configuration management for managing the files. It's really the establishment of new containers that I'm curious about.
You can pass configuration through environment variables or mount a host file as a data volume.
Example for nginx configuration:
docker run --name some-nginx -v /yoursystem/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
You can use many images from Docker Hub as a starting point, for example nginx-php-fpm.
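To get the initial configuration out of a fresh container without installing an editor inside it, one approach is to copy the defaults to the host first and iterate there; a sketch (the container name and paths are just examples):

# Dump the stock config from a throwaway container onto the host
docker run --name tmp-nginx -d nginx
docker cp tmp-nginx:/etc/nginx/nginx.conf ./nginx.conf
docker rm -f tmp-nginx

# Edit nginx.conf on the host, then mount it read-only as shown above
docker run --name some-nginx -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx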
