Can Docker's COPY or RUN cp be used in a Dockerfile to overwrite a default config file with a docker-specific version of the file?
In a Rails project, our config folder has multiple versions of database.yml for different environments:
# projectname/config/
database.yml # an unused default placeholder
database_docker_2.yml
database_for_vagrant.yml
For different dev environments (Vagrant+VirtualBox vs. Docker), we copy the appropriate version of the .yml to database.yml during initialization of the machine/container.
In the Dockerfile, after this section:
WORKDIR /my_app
RUN bundle install
COPY . /my_app
We tried:
RUN cp ./config/database_docker_2.yml /my_app/config/database.yml
but the file does not seem to be copied; the default version of database.yml is used when we spin up the container.
We then tried:
COPY ./config/database_docker_2.yml /my_app/config/database.yml
but the file still does not seem to be copied; the default version of the file gets used when we spin up the container.
What DOES work is adding another entry to the volume section of docker-compose.yml specifically for that one file:
volumes:
- .:/my_app
- ./config/database_docker_2.yml:/my_app/config/database.yml
but we prefer to manage the placement of env-specific versions of files in the Dockerfile (as opposed to littering the docker-compose.yml with such env-specific files)
The command COPY ./config/database_docker_2.yml /my_app/config/database.yml probably works; there is no reason it shouldn't, assuming the source exists.
What I suspect happens is that when you test it, you already have a volume mounted with .:/my_app, which then shows you the local folder and not the in-container folder.
Run it without the volume, and I believe you will in fact see that the file was copied into the container, as you intended.
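One quick way to verify this (a sketch, assuming the image builds under the tag my_app) is to bypass docker-compose and its volume mounts entirely:

# Build the image and inspect the baked-in file without any volumes;
# the version placed by COPY should show up here.
docker build -t my_app .
docker run --rm my_app cat /my_app/config/database.yml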
On a side note:
If you are not yet locked into your way of handling this multiple-database config, I would consider re-evaluating the situation and looking for a solution that does not require you to change database.yml for each environment. One way would be to have database.yml read an environment variable (usually DATABASE_URL). Then you have one docker-compose file and one database.yml for all environments, and you configure each environment with environment variables alone.
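For illustration, a database.yml along those lines might look like this (a minimal sketch, assuming PostgreSQL; Rails evaluates the file through ERB, so the url key can read the environment):

# config/database.yml -- one file for every environment;
# connection details come from the DATABASE_URL variable.
default: &default
  adapter: postgresql
  url: <%= ENV['DATABASE_URL'] %>

development:
  <<: *default

test:
  <<: *default

production:
  <<: *default

Each environment then only differs in the DATABASE_URL it sets, for example via the environment: section of docker-compose.yml.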
Related
So I have been tasked with taking an existing dockerized version of a service, and creating docker images from this repository.
Creating the images is not the problem, however, since the build command works without issue. The issue is that this Dockerfile copies a .env file during the build, which holds variables that must remain customizable after the build is done (expected DB and other endpoint info).
Is there some way for that file to be changed automatically to reflect the environment variables used in the docker run command? (I do want to note that the Docker image copies the .env file into the working directory; it is not docker-compose reading that .env file.)
I am sure there has to be an easy way to do this, but all the tutorials I am pulling up just show how to declare these variables, not how to get the files in Docker to use them! Most of the code being run is JavaScript and uses npm and yarn, if that makes any difference...
Docker does not provide any way to update files from environment variables on container start. But I don't think this is what you need anyway:
As I understand it, a .env file with default values is copied into the image at build time, and you want to be able to change some of the values at runtime via container environment variables?
Usually such a .env file is read by the application and complemented by any variables set in the environment, i.e. you can override values from the file with environment variables. For JavaScript projects, dotenv is a popular module that does this.
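As a minimal sketch of that convention (the variable name is just an example):

// index.js
require('dotenv').config(); // loads .env, but never overwrites variables already set
// If API_ENDPOINT is set on the container, that value wins;
// otherwise the default from the baked-in .env file is used.
console.log(process.env.API_ENDPOINT);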
So to override say an API_ENDPOINT variable specified in .env you simply need to pass an environment variable with the same name and desired value to the container:
docker run -e API_ENDPOINT=/other/endpoint ...
If for some reason your applications do not follow this convention and you actually need to change the values in the .env file, you will need to write a custom script that updates/generates .env from the values of the passed environment variables, and use this script as the ENTRYPOINT.
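Such a script could look roughly like this (a sketch; the file path and the variable names are placeholders):

#!/bin/sh
# entrypoint.sh -- regenerate .env from the container's environment
# variables, then hand control to the container's main command.
set -e
cat > /app/.env <<EOF
API_ENDPOINT=${API_ENDPOINT:-/default/endpoint}
DB_HOST=${DB_HOST:-localhost}
EOF
exec "$@"

with ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile and your existing start command as CMD.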
I have a very strange issue with copying the contents of subdirectories to a Docker container.
This is the directory structure: a Dockerfile at the top level, and a WebApp folder containing a second Dockerfile plus the Testdir, Bilder, and JSON directories.
Note: there are two Dockerfiles; I use the one on the upper level for test purposes. Ignore the one in the WebApp folder.
I want to copy the directories Bilder and JSON to the container, including all contents, but it doesn't work: the folders in the container end up empty. Copying Testdir, however, does work.
This is part of my Dockerfile:
FROM python:3.7-buster
# -- Init --
RUN mkdir -p /app/src
WORKDIR /app/src
# works
ADD WebApp/Testdir ./Testdir
# doesn't work
ADD WebApp/Bilder ./Bilder
# keep the container alive briefly so its contents can be checked
CMD ["sleep", "50"]
I build the image as part of a docker-compose.yml file with
docker-compose build test
Does anyone have a clue what's happening here? I've been searching for a solution for quite some time...
In case anyone is interested in why this was a problem: it actually had nothing to do with Docker. I was working on a cluster that was not synchronizing my local files to the server correctly, so I solved the issue by checking each time whether the files had actually been copied from my local machine to the cluster. If someone has a similar issue, it may be worth checking whether file accessibility is the problem.
While setting up and configuring some Docker containers, I asked myself how I could automatically edit certain config files inside the container after the containerized service has finished installing (since the config files are created during the installation).
I have tried doing that with a shell script added as the ENTRYPOINT in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Mounting the config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found is running a script that edits the config manually after the installation has finished, with docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (there are only a very limited number of environment variables to specify). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick its settings up from environment variables. That would be consistent with the twelve-factor app methodology; see here.
As I understand your case, you need your container to create its config from some template at startup.
I see a number of options to do it:
Use a script that generates the config from a template plus command-line arguments or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case, your entrypoint renders the template and then starts the application; see the sketch after this list. To change the config, you will have to restart the service (container).
Run a service that stores the configuration and renders your config at runtime. Personally, I like consul-template; we use this engine actively in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed on the fly. In your container you will then have two processes: the application and the consul-template daemon. Obviously, you will need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
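As a minimal sketch of the first option, using envsubst (from the gettext package) in place of a full template engine; the paths are placeholders:

#!/bin/sh
# entrypoint.sh -- render the config from a template using the
# container's environment variables, then start the real service.
set -e
envsubst < /templates/myConfig.conf.tmpl > /xy/myConfig.conf
exec "$@"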
I'm using DreamFactory REST API in a Docker container and I need to disable wrapper "resource" in payload. How can I achieve this?
I have replaced the following in all of these four files:
opt/bitnami/dreamfactory/.env-dist
opt/bitnami/dreamfactory/vendor/dreamfactory/df-core/config/df.php
opt/bitnami/dreamfactory/installer.sh
bitnami/dreamfactory/.env
DF_ALWAYS_WRAP_RESOURCES=true
with:
DF_ALWAYS_WRAP_RESOURCES=false
but this doesn't fix my problem.
The change you describe is indeed the correct one as found in the DreamFactory wiki. Therefore I suspect the configuration has been cached. Navigate to your DreamFactory project's root directory and run this command:
$ php artisan config:clear
This will wipe out any cached configuration settings and force DreamFactory to read the .env file anew. Also, keep in mind you only need to change the .env file (or manage your configuration variables in your server environment). The other files won't play any role in configuration changes.
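Since DreamFactory runs inside a container here, you would execute that command in the container; a sketch, assuming the container is named dreamfactory:

# The container name is an assumption; adjust it to your setup.
docker exec -it dreamfactory sh -c "cd /opt/bitnami/dreamfactory && php artisan config:clear"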
I followed this docker-compose tutorial on how to start a Rails app.
It runs perfectly, but the app isn't reloaded when I change a controller.
What could be missing?
I was struggling with this as well; there are 2 things that you need to do:
Map the current directory to the place where Docker is currently hosting the files.
Change the config.file_watcher to ActiveSupport::FileUpdateChecker
Step 1:
In your Dockerfile, check where you are copying/adding the files.
See my Dockerfile:
# https://docs.docker.com/compose/rails/#define-the-project
FROM ruby:2.5.0
# The qq is for silent output in the console
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs vim
# Sets the path where the app is going to be installed
ENV RAILS_ROOT /var/www/
# Creates the directory and all the parents (if they don't exist)
RUN mkdir -p $RAILS_ROOT
# This is given by the Ruby image.
# This will be the de-facto directory where
# all the contents are going to be stored.
WORKDIR $RAILS_ROOT
# We are copying the Gemfile first, so we can install
# all the dependencies without any issues
# Rails will be installed once you load it from the Gemfile
# This will also ensure that gems are cached and only updated when
# they change.
COPY Gemfile ./
COPY Gemfile.lock ./
# Installs the Gem File.
RUN bundle install
# We copy all the files from the current directory into our
# WORKDIR (/var/www/)
# Pay close attention to the dot (.)
# The first one will select ALL the files of the current directory,
# The second dot will copy them to the WORKDIR!
COPY . .
The /var/www directory is key here. That's the inner folder structure of the image, and it's where you need to map the current directory to.
Then, in your docker-compose.yml, define a key called volumes and place that path there (works for v2 as well!):
version: '3'
services:
rails:
# Relative path to Dockerfile.
# Docker will read the Dockerfile automatically
build: .
# This is the one that makes the magic
volumes:
- "./:/var/www"
Check that the docker-compose.yml and Dockerfile are in the same directory. They don't necessarily need to be; you just have to be sure that the directories are specified correctly.
docker-compose works relative to the file. The ./ means that it will take the current docker-compose directory (in this case, the entire Ruby app) as the host side of the mapping.
The : is just the divider between the host path and the path where the image has it.
The next part, /var/www/, is the same path you specified in the Dockerfile.
Step 2:
Open development.rb (found in config/environments)
and look for config.file_watcher. Replace:
config.file_watcher = ActiveSupport::EventedFileUpdateChecker
for:
config.file_watcher = ActiveSupport::FileUpdateChecker
This uses a polling mechanism instead.
If that doesn't work, try adding the following line in the environment file, in this case development.rb:
config.cache_classes = false
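Put together, the relevant part of config/environments/development.rb would look roughly like this (an excerpt; the rest of the file stays as generated):

# config/environments/development.rb (excerpt)
Rails.application.configure do
  # Reload code on every request instead of caching classes.
  config.cache_classes = false

  # Poll for file changes instead of relying on filesystem events,
  # which often don't propagate into a Docker volume.
  config.file_watcher = ActiveSupport::FileUpdateChecker
end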
That's it!
Remember that for anything that is not routes.rb and is not inside the app folder, it is highly probable that the Rails app will need to be reloaded manually.
Complementing Jose A's answer: I changed the property config.cache_classes inside development.rb to false and it solved the problem. Its explanation follows:
# In the development environment your application's code is reloaded on
# every request. This slows down response time but is perfect for development
# since you don't have to restart the web server when you make code changes.
config.cache_classes = false
Try rebuilding the image with the following command to add the changes to the dockerized app:
docker-compose build
After that, you need to restart the app with docker-compose up to recreate the Docker container for your app.
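In short, the whole cycle is:

docker-compose build   # rebuild the image so it picks up the code changes
docker-compose up      # recreate and start the container from the new image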
You should create a volume that maps your local/host source code to the same code located inside Docker in order to work on the code and enable such features.
Here's an example of a (docker-compose) mapped file that I'm updating in my editor without having to go through the build process just to see my updates:
volumes:
- ./lib/radius_auth.py:/etc/freeradius/radius_auth.py
Without mapping host <--> guest, the guest will simply run whatever code it has received during the build process.
I had the same issue. I don't know whether your issue is the same as mine, but in my case the following fixed it:
1. Restart the Mac
2. Delete application.yml
3. make
4. make up