Custom Environment Variables for Kafka Connect via Docker

Is there a way to provide custom variables via Docker-Compose that can be referenced within a Kafka Connector config?
I have the following setup in my docker-compose.yml:
- "sql_server=1.2.3.4"
- "sql_database=db_name"
- "sql_username=some_user"
- "sql_password=nahman"
- "sql_applicationname=kafka_connect"
Here is my .json configuration file:
{
  "name": "vwInv_Tran_Amounts",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "src.consumer.interceptor.classes": "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor",
    "tasks.max": 2,
    "connection.url": "jdbc:sqlserver://${sql_server};database=${sql_database};user=${sql_username};password={sql_password};applicationname={sql_applicationname}",
    "query": "SELECT * FROM vwInv_Tran_Amounts",
    "mode": "timestamp",
    "topic.prefix": "inv_tran_amounts",
    "timestamp.column.name": "timestamp",
    "incrementing.column.name": "Inv_Tran_ID"
  }
}
I was able to reference the environment variables using this method with Elastic Logstash, but it doesn't appear to work here.
Whenever loading it via curl I receive:
The connection string contains a badly formed name or value. for configuration Couldn't open connection to jdbc:sqlserver://${sql_server};database=${sql_database};user=${sql_username};password={sql_password};applicationname={sql_applicationname}\nInvalid value com.microsoft.sqlserver.jdbc.SQLServerException: The connection string contains a badly formed name or value.
EDIT:
I tried prefixing environment variables like CONNECT_SQL_SERVER, and that didn't work.

I feel like you are looking for Externalizing Kafka Connect secrets, but that would require mounting a file, not using env vars.
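For reference, a minimal sketch of that approach using the FileConfigProvider that ships with Kafka (2.0+); the mount path /opt/connect-secrets.properties is illustrative. First enable the provider in the worker properties:

  # Worker properties: enable the file config provider
  config.providers=file
  config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

Then the connector config can reference keys from the mounted file, resolved when the connector is loaded:

  "connection.url": "jdbc:sqlserver://${file:/opt/connect-secrets.properties:sql_server};database=${file:/opt/connect-secrets.properties:sql_database}"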
JSON Connector config files aren't loaded on Docker container startup. I made this issue to see if this would be possible.
You would have to template out the JSON file externally, then HTTP-POST it to the Connect REST port exposed by the container.
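A sketch of that workflow, assuming a template file named connector.json.template that uses ${sql_server}-style placeholders and Connect's REST API on its default port 8083:

  # Substitute environment variables into the template, then POST it to Connect
  envsubst < connector.json.template | \
    curl -X POST -H "Content-Type: application/json" --data @- http://localhost:8083/connectors

envsubst ships with GNU gettext and replaces ${VAR} references with values from the current environment.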
Tried prefixing environment variables like CONNECT_SQL_SERVER
Those values would go into the Kafka Connect Worker properties, not the properties that need to be loaded by a specific connector task.
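For context, the Confluent Docker images build the worker configuration from CONNECT_-prefixed variables by stripping the prefix, lowercasing, and turning underscores into dots, for example (values illustrative):

  # CONNECT_BOOTSTRAP_SERVERS=kafka:9092  ->  bootstrap.servers=kafka:9092
  # CONNECT_SQL_SERVER=1.2.3.4            ->  sql.server=1.2.3.4

So CONNECT_SQL_SERVER would become the worker property sql.server, which no connector configuration ever reads.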

Related

Adding docker run flags to ECS operator in airflow

I'm using ECSOperator in Airflow and I need to pass flags to docker run. I searched the internet but couldn't find a way to give an ECSOperator flags such as -D, --cpus, and more.
Is there a way to pass these flags to docker run (if a certain condition is true) using the ECSOperator (the same way we can pass tags and network configuration), or can they only be defined in the ECS container running the docker image?
I'm not familiar with ECSOperator, but if I understand correctly it is a Python library, and you can create a new task using Python.
As this example shows, it is possible to set task_definition and overrides:
...
ecs_operator_task = ECSOperator(
    task_id="ecs_operator_task",
    dag=dag,
    cluster=CLUSTER_NAME,
    task_definition=service['services'][0]['taskDefinition'],
    launch_type=LAUNCH_TYPE,
    overrides={
        "containerOverrides": [
            {
                "name": CONTAINER_NAME,
                "command": ["ls", "-l", "/"],
            },
        ],
    },
    network_configuration=service['services'][0]['networkConfiguration'],
    awslogs_group="mwaa-ecs-zero",
    awslogs_stream_prefix=f"ecs/{CONTAINER_NAME}",
...
So if you want to set CPU and memory specs for the whole task, you have to update the task_definition dictionary parameters (something like service['services'][0]['taskDefinition']['cpu'] = 2048).
If you want to specify parameters for an exact container, overrides should be the proper way:
overrides={
    "containerOverrides": [
        {
            "cpu": 2048,
            ...
        },
    ],
},
Or, in theory, edited containerDefinitions may be set directly inside task_definition.
In any case, most Docker parameters should be passed inside the containerDefinitions section.
So about your question:
Is there a way to pass these flags to a docker run
If I understand correctly, you have a JSON TaskDefinition file and want to run it locally using Docker?
Then try checking these tools. They convert a docker-compose.yml into an ECS definition, which is the opposite of what you are looking for, but maybe some of them can convert in the other direction as well.
Otherwise you would have to parse the TaskDefinition JSON manually and convert it into docker run arguments.
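A rough sketch of that manual conversion, using jq and the AWS CLI (the task name my-task is illustrative, and only a few fields are mapped; a real task definition has many more):

  # Fetch the registered task definition
  aws ecs describe-task-definition --task-definition my-task \
    --query taskDefinition > taskdef.json

  # Map a few containerDefinitions fields onto docker run flags
  IMAGE=$(jq -r '.containerDefinitions[0].image' taskdef.json)
  CPU=$(jq -r '.containerDefinitions[0].cpu' taskdef.json)
  ENVS=$(jq -r '.containerDefinitions[0].environment[]? | "-e \(.name)=\(.value)"' taskdef.json)

  # ECS cpu units roughly correspond to docker run --cpu-shares;
  # $ENVS is left unquoted on purpose so each -e flag word-splits
  docker run $ENVS --cpu-shares "$CPU" "$IMAGE"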

How to add a custom environment variables to docker-ejabberd

I am running docker-ejabberd on ECS and all works fine. Now I want to replace the MySQL user/pass that exists in the ejabberd.yml file with environment variables passed to the image when running the container. There is no clear way described, even on the docker-ejabberd wiki, on how to do that simply. Has anyone faced a similar situation, and how did you handle it?
For example in the ejabberd.yml i have this section:
sql_server: ${MYSQL_SERVER}
sql_database: ${MYSQL_DATABASE_NAME}
sql_username: ${MYSQL_USERNAME}
sql_password: ${MYSQL_PASSWORD}
sql_port: ${MYSQL_PORT}
I want to pass those values as env vars with docker run and have them substituted before the container starts.
Side note: we are using ECS and passing the variables through the task definition without any issue.
I went through some topics that recommend using an ENTRYPOINT script to rewrite the file before the container runs, but I am not sure if that's a good idea.
Alternatively, could I replace the variables in the ejabberd.yml file in the CI/CD pipeline, just before building the image, while fetching the code from the git repository and creating the image on AWS ECR?
I want to replace the MySQL user/pass that exists in the ejabberd.yml file with environment variables passed to the image when running the container.
The ejabberd.yml file is read and parsed by the yconf library (https://github.com/processone/yconf), and I doubt it supports such a thing.
I went through some topics that recommend using an ENTRYPOINT script to rewrite the file before the container runs, but I am not sure if that's a good idea.
Following that recommendation, if you don't want to mess with the whole ejabberd.yml by letting a script manipulate it, you can ensure that only those specific options are parametrized:
You can define those vars using a script in a small file, and then include options from that small file into ejabberd.yml using
https://docs.ejabberd.im/admin/configuration/file-format/#include-additional-files
For example, in your ejabberd.yml, put something like this:
include_config_file:
  /etc/ejabberd/database.yml:
    allow_only: [sql_server, sql_database, sql_username, sql_password, sql_port]
Then write your script that generates that small file. For example:
$ generate-database-config.sh
$ cat /etc/ejabberd/database.yml
sql_server: "localhost"
sql_database: "ejaup"
sql_username: "ejabberd_test"
sql_password: "ejabberd_test"
sql_port: 3306
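The script itself is not shown above; a minimal sketch, assuming the MYSQL_* variables from the question are set in the container's environment:

  #!/bin/sh
  # generate-database-config.sh: render the include file from environment variables
  cat > /etc/ejabberd/database.yml <<EOF
  sql_server: "${MYSQL_SERVER}"
  sql_database: "${MYSQL_DATABASE_NAME}"
  sql_username: "${MYSQL_USERNAME}"
  sql_password: "${MYSQL_PASSWORD}"
  sql_port: ${MYSQL_PORT}
  EOF

Run it from your ENTRYPOINT (or an entrypoint wrapper) before starting ejabberd, so the include file exists when ejabberd.yml is parsed.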

How can I set multiple env variables pointing to the same value on docker?

I have several containers that I run together with docker-compose.
One of them is mysql, which requires some variables to be set. I have those in a .env file:
MYSQL_USER='my_user'
MYSQL_PASSWORD='my_password'
MYSQL_ROOT_PASSWORD='supersecretpassword'
MYSQL_DATABASE='my_database'
And I am able to start the mysql container successfully.
The problem comes when I want to use another service for db migrations, which require the following variables set in the .env file:
SERVICE_DBUSER='my_user'
SERVICE_DBPASSWORD='my_password'
SERVICE_DBNAME='my_database'
And what I would like to write (this doesn't work), to avoid repetition, is something like:
SERVICE_DBUSER="$MYSQL_USER"
SERVICE_DBPASSWORD="$MYSQL_PASSWORD"
SERVICE_DBNAME="$MYSQL_DATABASE"
But docker doesn't recognize that and doesn't perform the substitution. The docker docs also state that each line in an env file is expected to be in VAR=VAL format.
My question is, is it possible to avoid the repetition?
Many thanks.
Compose will substitute environment variables into the YAML compose file when you reference them with $VARIABLE or ${VARIABLE}.
You can still use the .env file to set a default environment. But when you want to reference a variable, put it in the environment: section of the compose yaml:
environment:
  SERVICE_DBUSER: "${MYSQL_USER}"
  SERVICE_DBPASSWORD: "${MYSQL_PASSWORD}"
  SERVICE_DBNAME: "${MYSQL_DATABASE}"
Then if you set or source an alternate environment when running docker-compose, you will get the new values substituted in.
$ MYSQL_USER="other" MYSQL_PASSWORD="opass" docker-compose start
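Putting the pieces together, a minimal docker-compose.yml sketch (the migrations service and its image name are illustrative):

  services:
    db:
      image: mysql
      env_file: .env                      # MYSQL_* read directly by the mysql image
    migrations:
      image: my-migrations-image          # hypothetical
      environment:
        SERVICE_DBUSER: "${MYSQL_USER}"   # substituted by Compose from .env or the shell
        SERVICE_DBPASSWORD: "${MYSQL_PASSWORD}"
        SERVICE_DBNAME: "${MYSQL_DATABASE}"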

Get or set env variable in docker-compose.yml file

I have got a docker-compose.yml file and there I define:
extra_hosts:
  - "localhost:${MY_MACHINE_IP}"
It works if I define MY_MACHINE_IP as environment var earlier.
What I want to achieve is to perform action like:
extra_hosts:
  - "localhost:<get MY_MACHINE_IP from env if it exists, if not set MY_MACHINE_IP env variable with value <docker-machine-ip>>"
In other words: I want to define it in extra_hosts section, if MY_MACHINE_IP is already specified, get it, if not - set this env. variable with value = my docker machine ip.
Is it possible?
Yes. According to the Docker documentation you can inspect a service's environment with:
docker-compose run SERVICE env
So the variables are not global as you may think; you have to pass them in as parameters.
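One way to get the use-it-if-set-else-fall-back behaviour is Compose's ${VAR:-default} syntax combined with a small wrapper script; a sketch, assuming a docker-machine named default:

  # docker-compose.yml can supply a hard fallback:
  #   extra_hosts:
  #     - "localhost:${MY_MACHINE_IP:-127.0.0.1}"

  # Or a wrapper script can compute the fallback dynamically:
  export MY_MACHINE_IP="${MY_MACHINE_IP:-$(docker-machine ip default)}"
  docker-compose up -d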
You can use the package ruamel.dcw for that (dcw for Docker Compose Wrapper, disclaimer: I am the author of that package). It allows you to create a section with key user-data in your docker-compose.yaml file, which is stripped out before handing the file to the normal docker-compose. That section can look like:
user-data:
  author: Your Name <your-name@youremail.com>
  description: container for postfix/submission
  env-defaults:
    PORT: 587  # override during development
    NAME: submission
    DOCKER_BASE: /data0/DATA
and then you can use {PORT}, {NAME} and {DOCKER_BASE} in the rest of the file, with the option of overriding these default values with environment variables.
The utility also writes out a file .dcw_env_vars.inc, which you can copy into your container and source to get the appropriate values into scripts you RUN from within the Dockerfile.

Accessing Elastic Beanstalk environment properties in Docker

So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regard to how the environment properties are handled. Is that correct?
According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering the containers that do support custom environment properties say so explicitly, like Python shown here [2]. Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode; all my environment-specific configuration is then set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. Looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass environment variables at docker run time using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with the Dockerrun.aws.json and Dockerfile, and upload it to Beanstalk.
To see the result, inside the EC2 instance, execute the command "docker inspect CONTAINER_ID" and you will see the environment variable.
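For example, to print only the environment block of a running container:

  # Show just the container's environment variables
  docker inspect -f '{{range .Config.Env}}{{println .}}{{end}}' CONTAINER_ID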
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current which is where your code should be within the EB instance
Use a package like python-dotenv to load the .env file or something similar if you aren't using Python. Note that this solution should be generic to any language/framework that you're using within your container.
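If you are not using Python, one generic option is to export the same file from a shell entrypoint wrapper; a minimal sketch (the exec line assumes this runs as the container's entrypoint):

  # set -a marks every variable assigned while it is active for export
  set -a
  . /var/app/current/.env
  set +a
  exec "$@"   # hand off to the container's main process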
I don't think the docs are a miss as Rohit Banga's answer suggests, though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs say, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open. You specify whatever variables you want ... they make no assumptions as to what you are running, Rails (Ruby), Django (Python) etc ... because it could be anything. They don't know before hand what you want to run and that makes it difficult to set sensible defaults.