DreamFactory: how to disable the "resource" wrapper in a Docker container

I'm using the DreamFactory REST API in a Docker container and I need to disable the "resource" wrapper in the payload. How can I achieve this?
I have replaced the following setting in all four of these files:
opt/bitnami/dreamfactory/.env-dist
opt/bitnami/dreamfactory/vendor/dreamfactory/df-core/config/df.php
opt/bitnami/dreamfactory/installer.sh
bitnami/dreamfactory/.env
changing
DF_ALWAYS_WRAP_RESOURCES=true
to
DF_ALWAYS_WRAP_RESOURCES=false
but this doesn't fix my problem.

The change you describe is indeed the correct one as found in the DreamFactory wiki. Therefore I suspect the configuration has been cached. Navigate to your DreamFactory project's root directory and run this command:
$ php artisan config:clear
This will wipe out any cached configuration settings and force DreamFactory to read the .env file anew. Also, keep in mind you only need to change the .env file (or manage your configuration variables in your server environment); the other files play no role in configuration changes.
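Since DreamFactory is running inside a container here, you can run the same command through docker exec; the container name dreamfactory and the Bitnami install path below are assumptions, so adjust them to your setup:
$ docker exec -it dreamfactory php /opt/bitnami/dreamfactory/artisan config:clear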

Related

Provide GitHub file as default conf file in a docker-compose volume

So my question is whether it is possible to have a volume like:
"${my_conf_file}:-raw.my/GitHub/file.git":/conf.json
This would be my goal; however, I cannot find anything related to it. In the end, if the user has a file, that file should be passed; otherwise, either conf.json should not be replaced at all (because the GitHub file is already there, only to be replaced by a conf file the user might have) or the file from GitHub should be passed again.
It is best to figure out the first part ("${my_conf_file}:-raw.my/GitHub/file.git") ahead of the docker run.
In your start script (which calls docker run or uses your docker-compose.yml), add logic to determine which config file you want (the user's, conf.json itself, or the one from GitHub).
Once you can script that, you can add your docker run -v call, which will mount the right file at /conf.json in the container.
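A minimal sketch of such a start script, assuming my_conf_file, the image name, and the raw GitHub URL are placeholders for your own values:
#!/bin/sh
# Use the user's config file if provided; otherwise fetch the default from GitHub.
CONF="${my_conf_file:-}"
if [ -z "$CONF" ]; then
  CONF="$(mktemp)"
  curl -fsSL "https://raw.githubusercontent.com/<user>/<repo>/master/conf.json" -o "$CONF"
fi
# Note: -v needs an absolute path; mktemp returns one, user-supplied paths may not.
docker run -v "$CONF:/conf.json" my-image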

Automatically Configure Config inside Docker Container

While setting up and configuring some Docker containers, I asked myself how I could automatically edit config files inside the container after the containerized service has finished installing (since the config files are created during installation).
I have tried doing that with a shell script added as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Linking a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found was to manually run a script that edits the config after the installation has finished, using docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (only a very limited number of environment variables can be specified). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick up its settings from environment variables. That would be consistent with the twelve-factor app methodology.
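For example, in a docker-compose.yml you would pass such settings through the environment instead of editing files inside the container (the service and variable names here are hypothetical):
services:
  app:
    image: my-app
    environment:
      - APP_DB_HOST=db
      - APP_LOG_LEVEL=info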
As I understand your case, you need to generate the config from some template when your container starts.
I see a number of options for doing this:
Use a script that generates the config from a template plus arguments from the command line or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case, your entrypoint renders the template and then starts the application; to change the config, you will have to restart the service (container). A minimal sketch follows this list.
Run a service that stores the configuration and renders your config at run time. Personally, I like consul-template; we actively use this engine in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed "on the fly". Your container will have two processes: the application and the consul-template daemon. Obviously, you will also need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
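As a sketch of the first option, here is an entrypoint that uses envsubst as a simpler stand-in for Jinja2 or Mustache; the template path, config path, and variable placeholders are assumptions:
#!/bin/sh
# entrypoint.sh: render the config template with values taken from the
# environment, then hand control to the actual service process (the CMD).
envsubst < /templates/app.conf.tmpl > /etc/app/app.conf
exec "$@"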

Docker, how to COPY docker-specific versions of files to WORKDIR

Can Docker's COPY or RUN cp be used in a Dockerfile to overwrite a default config file with a docker-specific version of the file?
In a Rails project, our config folder has multiple versions of database.yml for different environments:
# projectname/config/
database.yml # an unused default placeholder
database_for_docker_2.yml
database_for_vagrant.yml
For different dev environments (Vagrant+VirtualBox vs. Docker), we copy the appropriate version of the .yml to database.yml during initialization of the machine/container.
In the Dockerfile, after this section:
WORKDIR /my_app
RUN bundle install
COPY . /my_app
we tried:
RUN cp ./config/database_docker_2.yml /my_app/config/database.yml
but the file does not seem to be copied; the default version of database.yml is used when we spin up the container.
we then tried:
COPY ./config/database_docker_2.yml /my_app/config/database.yml
the file still does not seem to be copied; the default version of the file gets used when we spin up the container.
What DOES work is adding another entry to the volume section of docker-compose.yml specifically for that one file:
volumes:
- .:/my_app
- ./config/database_docker_2.yml:/my_app/config/database.yml
but we prefer to manage the placement of env-specific versions of files in the Dockerfile (as opposed to littering the docker-compose.yml with such env-specific files).
The command COPY ./config/database_docker_2.yml /my_app/config/database.yml probably works; there is no reason it shouldn't, assuming the source exists.
What I suspect happens is that when you are testing it, you already have a volume with .:/my_app, which then shows you the local folder, not the in-container one.
Run it without the volume, and I believe you will in fact see that the file was copied into the image, as you intended.
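One quick way to verify this, assuming the image is tagged my_app_image: build it and print the file without mounting any volumes:
$ docker build -t my_app_image .
$ docker run --rm my_app_image cat /my_app/config/database.yml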
On a side note:
If you are not yet locked into your way of handling this multiple-database config, I would consider re-evaluating the situation and trying to find a solution that does not require you to change database.yml for each environment. One way would be to have database.yml read an environment variable (usually DATABASE_URL); then you have one docker-compose file for all and one database.yml for all, and you configure each environment with environment variables alone.
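A minimal sketch of such a database.yml (the postgresql adapter is an assumption; Rails expands the ERB tag when it loads the file):
# config/database.yml -- one file for every environment
default: &default
  adapter: postgresql
  url: <%= ENV["DATABASE_URL"] %>
development:
  <<: *default
production:
  <<: *default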

Security of a Docker image

I am considering to package a Rust application into a Docker container.
The current version of that application contains various credential files used to authenticate to the Discord API or the Google API through a service account key.
Would these files be accessible if I package my application like this?
[EDIT: added Dockerfile]
FROM rust:1.28.0
WORKDIR /usr/src/<application>
COPY . .
RUN cargo install --force --path .
CMD ["<application>"]
Never put actual credentials into anything that might be accessed by anyone but you.
You basically have two options:
1) Have your application pull the required credentials from its environment, then set these variables when you start the container (see the docs).
2) Have your application read the credentials from a config file that doesn't get pulled into the Docker image. Then, when running the container, mount that file into it (see the docs).
You could actually do both: have an environment variable that tells your application whether it should look for a config file (in production, say), and if that variable is unset, read the credentials from the environment (for development). Both options might look like the commands below.
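A sketch of both options; the variable name, mount path, and image name are assumptions:
# Option 1: pass the secret through the environment.
$ docker run -e DISCORD_TOKEN="$DISCORD_TOKEN" <application>
# Option 2: mount the credentials file read-only at run time.
$ docker run -v "$PWD/credentials.json:/etc/app/credentials.json:ro" <application>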
Edit: It's best practice to create a .dockerignore file in your build context, containing the name (or path) of the file holding the credentials.
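For example, a .dockerignore along these lines (the file names are assumptions) keeps the credentials out of the build context entirely, so COPY . . can never pick them up:
# .dockerignore
credentials.json
*.pem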

docker-compose caches run results

I'm having an issue with docker-compose where I'm passing a file into the container when it's run. The issue is that it doesn't seem to recognize when the file has been changed and serves the saved result back indefinitely until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, in case that affects things. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically, I'm using this repo from the amazing Jess.
The image you are using relies on a volume to mount your current directory; that is how the file conf.opvn is made available to the Docker container.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the user rights of the file and of the folder in the Docker container where the file is mounted. Try changing the file's permissions to 777 before beginning the process and check again, as in the commands below.
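That is, something along these lines (the service name vpn follows the question):
$ chmod 777 conf.opvn
$ docker-compose run vpn conf.opvn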
You can find a discussion about this on the official Docker forum.
