Well, I am wondering if it's possible to create the queue on container startup but without the config file, because at work we use an internal CI tool where we can only set some environment variables and are forced to rewrite the entrypoint command in the CI config file. The reason is that the CI config file does NOT have access to the CI workspace and its environment variables or files, such as a possible elasticmq-custom.conf, so using one wouldn't be possible.
The CI config file is like this:
schemaVersion: 2.0
image: docker://docker.io/softwaremill/elasticmq-native
host: QUEUE_URL
ports:
  - name: QUEUE_PORT
    default: 9325
commands:
  # here I would set some environment variables that can be accessed by the new start command
  - /sbin/tini -- /opt/docker/bin/elasticmq-native-server -Dconfig.file=/opt/elasticmq.conf -Dlogback.configurationFile=/opt/logback.xml
The goal would be to create the queue using the commands above. Any idea?
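One possible sketch, with caveats: it assumes the entries under commands: run inside the container before the server starts, that my-queue is just a placeholder name, and that ElasticMQ will pick up queues defined in whatever file -Dconfig.file points at while taking the remaining settings from its built-in defaults. Under those assumptions you could generate a minimal config on the fly and start the server against it:

commands:
  # write a minimal ElasticMQ config containing only the queue definition
  - printf 'queues { my-queue { defaultVisibilityTimeout = 10 seconds } }' > /tmp/custom.conf
  # start the server against the generated file instead of /opt/elasticmq.conf
  - /sbin/tini -- /opt/docker/bin/elasticmq-native-server -Dconfig.file=/tmp/custom.conf -Dlogback.configurationFile=/opt/logback.xml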
I am running docker-ejabberd on ECS and all works fine. Now I want to replace the MySQL user/pass that exist in the ejabberd.yml file with the environment variables being passed to the image while running the container. There is no clear way described, even on the docker-ejabberd wiki or anywhere else, on how to do that simply. Has anyone faced a similar situation, and how did you handle it?
For example, in the ejabberd.yml I have this section:
sql_server: ${MYSQL_SERVER}
sql_database: ${MYSQL_DATABASE_NAME}
sql_username: ${MYSQL_USERNAME}
sql_password: ${MYSQL_PASSWORD}
sql_port: ${MYSQL_PORT}
I want to pass those vars as env vars during docker run and have them substituted before the container runs.
Side note: We are using ECS and passing the variables through the task definition without any issue.
I went through some topics that recommend using the ENTRYPOINT command to run a script that replaces the file before running the container, but I'm not sure if that's a good idea.
Also, I have the idea of replacing the variables in this ejabberd.yml file in the CI/CD pipeline, just before building the image, while getting the code from the git repository and creating the image on AWS ECR. Would that work?
I want to replace the MySQL user/pass that exist in the ejabberd.yml file with the environment variables being passed to the image while running the container.
The ejabberd.yml file is read and parsed by the yconf library (https://github.com/processone/yconf), and I doubt it supports such a thing.
I went through some topics that recommend using the ENTRYPOINT command to run a script that replaces the file before running the container, but I'm not sure if that's a good idea.
Following that recommendation, if you don't want a script to mess with the whole ejabberd.yml, you can ensure that only those specific options are parameterized:
You can have a script write those options to a small separate file, and then include the options from that small file into ejabberd.yml using
https://docs.ejabberd.im/admin/configuration/file-format/#include-additional-files
For example, in your ejabberd.yml, put something like this:
include_config_file:
  /etc/ejabberd/database.yml:
    allow_only: [sql_server, sql_database, sql_username, sql_password, sql_port]
Then write your script that generates that small file, for example:
$ generate-database-config.sh
$ cat /etc/ejabberd/database.yml
sql_server: "localhost"
sql_database: "ejaup"
sql_username: "ejabberd_test"
sql_password: "ejabberd_test"
sql_port: 3306
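A minimal sketch of what generate-database-config.sh could look like (assuming the same environment variable names as in the question; this script is illustrative, not part of the original answer):

#!/bin/sh
# generate-database-config.sh (sketch): write the SQL options from the
# environment variables into the small include file read by ejabberd.yml
cat > /etc/ejabberd/database.yml <<EOF
sql_server: "${MYSQL_SERVER}"
sql_database: "${MYSQL_DATABASE_NAME}"
sql_username: "${MYSQL_USERNAME}"
sql_password: "${MYSQL_PASSWORD}"
sql_port: ${MYSQL_PORT}
EOF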
I am currently trying to build an NGINX Docker container that will be running alongside a Jupyter container. Within Jupyter, there is a download capability that I wish to disable or enable during the NGINX container build process.
Currently, I am passing a build argument in through the Dockerfile that will be read into the nginx.conf file as an environment variable. However, it seems as though the location directive that controls download within Jupyter cannot be placed within a conditional. If I understand correctly, the location directive must be under the server directive at all times.
env DOWNLOAD;
...
http {
    ...
    server {
        ...
        if (DOWNLOAD = 'true') {
            location / {
                ...
            }
        }
    }
}
When I attempt to build the container with the configuration above, I run into this error:
"location" directive is not allowed here..."
My question is - if conditionals are tricky to have functioning correctly in a NGINX conf file, are there are any approaches to controlling a location directive within the NGINX conf file provided an environment variable?
Thanks in advance.
The approach I use:
Create an nginx-entry.sh file that resolves all of nginx's configuration variables
Inject this nginx-entry.sh file into the nginx container
Switch the entrypoint of the nginx container to the nginx-entry.sh file
Working sample in my toy project:
Dockerfile - https://github.com/taleodor/mafia-vue/blob/master/Dockerfile
Nginx config - https://github.com/taleodor/mafia-vue/tree/master/nginx
Using this technique you can tweak/template the configuration the way you need it.
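For the DOWNLOAD flag above, a minimal sketch of such an entry script could look like the following (the template path, the snippet file, and the DOWNLOAD_LOCATION variable are assumptions for illustration, not taken from the linked project):

#!/bin/sh
# nginx-entry.sh (sketch): render nginx.conf from a template, then start nginx
if [ "$DOWNLOAD" = "true" ]; then
  # insert an include that enables the download location block
  export DOWNLOAD_LOCATION='include /etc/nginx/snippets/download.conf;'
else
  export DOWNLOAD_LOCATION=''
fi
envsubst '${DOWNLOAD_LOCATION}' < /etc/nginx/templates/nginx.conf.template > /etc/nginx/nginx.conf
exec nginx -g 'daemon off;'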
While setting up and configuring some Docker containers I asked myself how I could automatically edit some config files inside the container after the containerized service has finished installing (since the config files are created during the installation).
I have tried that using a shell script and adding it as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option because the config contains some installation-dependent options.
The most reasonable solution I have found was running a script that edits the config manually after the installation has finished, with docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (there are only a very limited number of environment variables to specify them). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick its settings from environment variables. That would be consistent with the 12-factor app methodology (see here).
As I understand your case, you need to start your container by creating the config from some template.
I see a number of options to do it:
Use a script that generates the config from a template plus arguments from the command line or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case, your entrypoint renders the template and then starts the application. To change the config, you will have to restart the service (container). A sketch of such an entrypoint follows this list.
Run a service that stores the configuration and renders your configuration at run time. Personally, I like consul-template; we actively use this engine in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed on the fly. In your container you will have two processes: the application and the consul-template daemon. Obviously, you will also need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
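As referenced in the first option, a minimal entrypoint sketch for the template approach could look like this (the template path, the {{...}} placeholders, and the variable names are illustrative assumptions):

#!/bin/sh
# render-config.sh (sketch): fill a config template from environment
# variables, then hand over to the image's original command
sed -e "s|{{DB_HOST}}|${DB_HOST}|g" \
    -e "s|{{DB_USER}}|${DB_USER}|g" \
    /opt/templates/myConfig.conf.template > /xy/myConfig.conf
exec "$@"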
I have got a docker-compose.yml file and there I define:
extra_hosts:
  - "localhost:${MY_MACHINE_IP}"
It works if I define MY_MACHINE_IP as an environment variable beforehand.
What I want to achieve is to perform an action like:
extra_hosts:
  - "localhost:<get MY_MACHINE_IP from env if it exists, if not set MY_MACHINE_IP env variable with value <docker-machine-ip>>"
In other words: I want to define it in the extra_hosts section; if MY_MACHINE_IP is already specified, use it, and if not, set this environment variable to the value of my docker-machine IP.
Is it possible?
Yes, according to the Docker documentation:
docker-compose run SERVICE env
So I think the variables are not global as you may think. You have to pass them as parameters.
Read this.
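For example, one way to pass the value as a parameter (the machine name default is only an assumption for illustration) is to set the variable inline when invoking compose:

# set MY_MACHINE_IP just for this invocation, using the docker-machine IP
MY_MACHINE_IP=$(docker-machine ip default) docker-compose up -d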
You can use the package ruamel.dcw for that (dcw stands for Docker Compose Wrapper; disclaimer: I am the author of that package). It allows you to create a section with the key user-data in your docker-compose.yaml file, which is stripped out before the file is handed to the normal docker-compose. That section can look like:
user-data:
  author: Your Name <your-name@youremail.com>
  description: container for postfix/submission
  env-defaults:
    PORT: 587  # override during development
    NAME: submission
    DOCKER_BASE: /data0/DATA
and then you can use {PORT}, {NAME} and {DOCKER_BASE} in the rest of the file, with the option of overriding these default values with environment variables.
The utility also writes out a file .dcw_env_vars.inc, which you can copy into your container and source to get the appropriate values into scripts you RUN from within the Dockerfile.
So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regard to how the environment properties are handled. Is that correct? According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering the containers that do support custom environment properties say so explicitly, like Python shown here [2]. Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode; then all my environment-specific configuration is handled by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. Looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass an environment variable at docker run time using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with the Dockerrun.aws.json and the Dockerfile, and upload it to Beanstalk
To see the result, execute the command docker inspect CONTAINER_ID inside the EC2 instance and you will see the environment variable.
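As a concrete illustration (the file name .ebextensions/env.config and the variable APP_ENV are placeholders, not from the original answer), the staging/production flag from the question might be set like this:

option_settings:
  - option_name: APP_ENV
    value: staging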
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current, which is where your code should be within the EB instance.
Use a package like python-dotenv to load the .env file or something similar if you aren't using Python. Note that this solution should be generic to any language/framework that you're using within your container.
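If you aren't using Python, one shell-based equivalent is to source the generated file before starting your process (a sketch; it assumes the .env file produced above contains plain KEY="value" lines, and your-app stands in for your actual start command):

# export every variable from the generated .env file, then start the app
set -a
. /var/app/current/.env
set +a
exec your-app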
Unlike Rohit Banga's answer, I don't think the docs are a miss, though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs says, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open. You specify whatever variables you want ... they make no assumptions as to what you are running, Rails (Ruby), Django (Python), etc. ... because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.