Configure kapacitor CLI to communicate to Remote Kapacitor - influxdb

How to configure kapacitor CLI to communicate with remote kapacitor.

Set a KAPACITOR_URL environment variable pointing to your host:port.
Once it is set, just run kapacitor plus its subcommands to retrieve data from tasks, etc.

It can also be done without any environment changes, e.g.:
kapacitor -url http://someotherhost:9092 <some command>
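A sketch of both styles side by side (the host and port are placeholders, and the example subcommands assume a Kapacitor instance with tasks defined):

```shell
# Point the CLI at a remote Kapacitor once per shell session.
# The host/port below are placeholders for your own instance.
export KAPACITOR_URL="http://someotherhost:9092"

# With the variable set, plain kapacitor commands go to the remote host:
#   kapacitor list tasks
#   kapacitor show <task-id>

# Or override per invocation, without touching the environment:
#   kapacitor -url "$KAPACITOR_URL" list tasks
```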

Related

How can I parse a JSON object from an ENV variable and set a new environment variable in a dockerfile?

I'm trying to set up Ory Kratos on ECS.
Their documentation says that you can run migrations with the following command...
docker -e DSN="engine://username:password@host:port/dbname" run oryd/kratos:v0.10 migrate sql -e
I'm trying to recreate this for an ECS task and the Dockerfile so far looks like this...
# syntax=docker/dockerfile:1
FROM oryd/kratos:v0.10
COPY kratos /kratos
CMD ["-c", "/kratos/kratos.yml", "migrate", "sql", "-e", "--yes"]
It uses the base oryd/kratos:v0.10 image, copies across a directory with some config and runs the migration command.
What I'm missing is a way to construct the -e DSN="engine://username:password@host:port/dbname" value. I'm able to supply my database secret from AWS Secrets Manager directly to the ECS task; however, the secret is a JSON object in a string, containing the engine, username, password, host, port and dbname properties.
How can I securely construct the required DSN environment variable?
Please see the ECS documentation on injecting SecretsManager secrets. You can inject specific values from a JSON secret as individual environment variables. Search for "Example referencing a specific key within a secret" in the page I linked above. So the easiest way to accomplish this without adding a JSON parser tool to your docker image, and writing a shell script to parse the JSON inside the container, is to simply have ECS inject each specific value as a separate environment variable.
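To illustrate the answer above: once ECS injects each JSON key of the secret as its own environment variable, a small entrypoint wrapper can assemble the DSN inside the container. The variable names and default values here are made up for the sketch; only the DSN shape comes from the question.

```shell
#!/bin/sh
# Hypothetical wrapper: assumes ECS injected DB_ENGINE, DB_USERNAME, etc.
# from the Secrets Manager JSON; the defaults are illustrative only.
DB_ENGINE="${DB_ENGINE:-postgres}"
DB_USERNAME="${DB_USERNAME:-kratos}"
DB_PASSWORD="${DB_PASSWORD:-secret}"
DB_HOST="${DB_HOST:-db.example.internal}"
DB_PORT="${DB_PORT:-5432}"
DB_NAME="${DB_NAME:-kratos}"

# Assemble the DSN in the shape the -e DSN=... example expects.
export DSN="${DB_ENGINE}://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DSN"

# The real image would then hand off to kratos, e.g.:
#   exec kratos -c /kratos/kratos.yml migrate sql -e --yes
```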

Storing default environment variables in Vault instead of env files in docker-compose for standard services

I have a docker-compose stack which uses standard software containers like:
InfluxDB
MariaDB
Node-Red
running on an industrial single-board computer (which may not be connected to the internet).
For the initial setup (bringing the stack up), I pass some standard credentials, like admin credentials, via their environment variable files, e.g. influxdb.env, mariadb.env etc.
A typical example of a docker-compose.yml here is:
services:
  influxdb:
    image: influxdb:2.0
    env_file:
      - influxdb.env
  nodered:
    image: nodered/node-red:2.2.2
    env_file:
      - node-red.env
An example of influxdb.env could be:
INFLUXDB_ADMIN_USER=admin
INFLUXDB_ADMIN_PASSWORD=password!#$2
# other env vars that might be crucial for initial stack boot up
These files are on the disk and can still be vulnerable. I wish to understand if Hashicorp Vault can provide a plausible solution where such credentials (secrets) can be stored as key-value pairs and be made available to the docker-compose services upon runtime.
I understand one bottleneck: since I am using standard (ready-to-use) containers, they may not have Vault integration. However, can I still use Vault to store the env vars and let the services access them at runtime? Or do I have to write side-cars for these containers and let them accept these env var values?
You have a few constraints to work with here:
Not storing secrets permanently in storage
docker-compose command line
Vault's output format
docker-compose can read its environment variables from a file. I suggest that you create that file and provide it to docker-compose with the --env-file parameter.
I can think of two approaches to write that file:
Write the output of multiple vault kv get to a file, in NAME=VALUE format
Use vault agent's template engine
The first option is quite straightforward: call a function that outputs the secrets and append its output to a file:
#!/bin/bash
function write_vault_secret_to_env_file() {
  local ENVIRONMENT_VARIABLE_NAME=$1
  local SECRET_PATH=$2
  local SECRET_NAME=$3
  echo "$ENVIRONMENT_VARIABLE_NAME=$(vault kv get --field "$SECRET_NAME" "$SECRET_PATH")"
}

write_vault_secret_to_env_file FIRST_ENVIRONMENT_VAR secret/my-path/things first-secret >> my-env-file.sh
write_vault_secret_to_env_file SECOND_ENVIRONMENT_VAR secret/my-path/stuff second-secret >> my-env-file.sh
Vault Agent's template engine is much more powerful, but it is more complex to set up.
Another suggestion would be to use Vault's dynamic secrets for databases (InfluxDB is supported), but you need to give Vault DBA privileges in your database. If you create the database from scratch every time, you could make the DBA password dba-root, give Vault that password, and instruct it to rotate it for you.
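For the first option, the end-to-end flow can be demonstrated without a live Vault server by stubbing the vault command; the paths, field names and secret values below are invented for the demo:

```shell
# Stub for `vault kv get --field <name> <path>` so the flow is runnable
# without a Vault server; real values would come from your instance.
vault() {
  case "$4" in
    first-secret)  echo "alpha" ;;
    second-secret) echo "beta" ;;
  esac
}

write_vault_secret_to_env_file() {
  local ENVIRONMENT_VARIABLE_NAME=$1
  local SECRET_PATH=$2
  local SECRET_NAME=$3
  echo "$ENVIRONMENT_VARIABLE_NAME=$(vault kv get --field "$SECRET_NAME" "$SECRET_PATH")"
}

write_vault_secret_to_env_file FIRST_ENVIRONMENT_VAR secret/my-path/things first-secret  > my-env-file
write_vault_secret_to_env_file SECOND_ENVIRONMENT_VAR secret/my-path/stuff second-secret >> my-env-file

cat my-env-file
# Then: docker-compose --env-file my-env-file up -d
```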

How can I communicate with my services in my gitlab job?

I have the following gitlab job:
build-test-consumer-ui:
  stage: build
  services:
    - name: postgres:11.2
      alias: postgres
    - name: my-repo/special-hasura
      alias: graphql-engine
  image: node:13.8.0-alpine
  variables:
    FF_NETWORK_PER_BUILD: 1
  script:
    - wget -q http://graphql-engine/v1/version
The docker image my-repo/special-hasura looks more or less like this:
FROM hasura/graphql-engine:v1.3.3.cli-migrations
ENV HASURA_GRAPHQL_DATABASE_URL="postgres://postgres:@postgres:/postgres"
ENV HASURA_GRAPHQL_ENABLED_LOG_TYPES="startup, http-log, webhook-log, websocket-log, query-log"
COPY ./my-migrations /hasura-migrations
EXPOSE 8080
When I run my gitlab job, I see that my hasura instance initializes properly, i.e. it can connect to postgres without any problem (the connection url HASURA_GRAPHQL_DATABASE_URL seems to be fine). However, I cannot access my hasura instance from my job's container, in the script section. The output of the command is
wget: bad address 'graphql-engine'
I suppose that the job's container is not located in the same network as the service containers. How can I communicate with the graphql-engine service from my job container? I am currently using gitlab-runner 13.2.4.
EDIT
Looking at the number of answers to this question, I guess there is no easy way. Therefore I'll switch to docker-compose: instead of using the services that I can theoretically define in my job, I'll use docker-compose in my job, and that will achieve exactly the same purpose.
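The docker-compose route mentioned in the edit could look roughly like this; the service names and images are carried over from the job definition, while the port and wiring are assumptions:

```yaml
services:
  postgres:
    image: postgres:11.2
  graphql-engine:
    image: my-repo/special-hasura
    depends_on:
      - postgres
  test:
    image: node:13.8.0-alpine
    depends_on:
      - graphql-engine
    # All services share the compose network, so the service name resolves;
    # Hasura listens on 8080 per the Dockerfile's EXPOSE.
    command: wget -q http://graphql-engine:8080/v1/version
```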

Passing environmental variables when deploying docker to remote host

I am having some trouble with my docker containers and environment variables.
Currently i have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (via the SSH command line, directly on server A), it will take the environment variables from server A and put them in my container, so I have the host's values of ENVA and ENVB in my container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on server B.
I can deploy the service via Jenkins (Job-DSL, setting DOCKER_HOST="tcp://serverA:2375" temporarily), so it runs all docker(-compose) commands on server A from the Jenkins container on server B. The service comes up, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on server B itself, but neither worked; only when I deploy manually, directly on server A, does it work.
When I use docker inspect on the running container, I get the following output for the env block:
"Env": [
"PROFILE=acc",
"affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
"TZ=Europe/Berlin",
"SERVICE_NAME=some-service-acc",
"ENVA",
"ENVB",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_VERSION=8",
"JAVA_UPDATE=121",
"JAVA_BUILD=13",
"JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
"JAVA_HOME=/usr/lib/jvm/default-jvm",
"JAVA_OPTS="
]
Where do I need to set the environment variables so that they will be passed to the container? I prefer to store the variables on server A, but if this is not possible, can someone explain how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool inside of docker. Hashicorp has their Vault product which implements an encrypted K/V store where values are accessed with a time limited token and offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure this countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a rest protocol that you can add to your entrypoint).
The latest option from Docker itself is secrets management that requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
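A minimal sketch of that last option, assuming Swarm Mode is enabled and reusing the service name from the question (the secret name is invented):

```yaml
version: "3.1"
services:
  some-service-acc:
    image: image/replacedvalues
    secrets:
      - enva
secrets:
  enva:
    external: true   # created beforehand with: docker secret create enva -
```

The secret then shows up inside the container as the file /run/secrets/enva rather than as an environment variable, and the stack is started with docker stack deploy instead of docker-compose up.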

Alias service environment var in a Docker container

I use Docker Compose to spin up my containers. I have a RethinkDB service container that exposes (amongst others) the host port in the following env var: APP_RETHINKDB_1_PORT_28015_TCP_ADDR.
However, my app must receive this host as an env var named RETHINKDB_HOST.
My question is: how can I alias the given env var to the desired one when starting the container (preferably in the most Dockerish way)? I tried:
env_file: .env
environment:
  - RETHINKDB_HOST=$APP_RETHINKDB_1_PORT_28015_TCP_ADDR
but first, it doesn't work and second, it doesn't look as if it's the best way to go.
When one container is linked to another, it sets the environment variable, but also a host entry. For example,
ubuntu:
  links:
    - rethinkdb:rethinkdb
will allow ubuntu to ping rethinkdb and have it resolve the IP address. This would allow you to set RETHINKDB_HOST=rethinkdb. This won't work if you are relying on that variable for the port, however, but that's the only thing I can think of besides adding a startup script or modifying your CMD.
If you want to modify your CMD, which is currently set to command: service rethink start, for example, just change it to prepend the variable assignment, e.g.
command: sh -c 'export RETHINKDB_HOST=$$APP_RETHINKDB_1_PORT_28015_TCP_ADDR && service rethink start'
(The doubled $$ stops docker-compose from interpolating the variable on the host, and export makes it visible to the service process.)
The approach would be similar if you are using a startup script, you would just add that variable assignment as a line before the service starts
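Such a startup script might look like the following sketch; the fallback hostname and the echoed check are illustrative, and the final service command is the one from the answer above:

```shell
#!/bin/sh
# Alias the linked-container variable to the name the app expects.
# Falls back to the link alias "rethinkdb" if the variable is unset.
export RETHINKDB_HOST="${APP_RETHINKDB_1_PORT_28015_TCP_ADDR:-rethinkdb}"
echo "RETHINKDB_HOST=$RETHINKDB_HOST"

# Then start the service, e.g.:
#   exec service rethink start
```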
The environment variable name APP_RETHINKDB_1_PORT_28015_TCP_ADDR you are trying to use already contains the port number; it is already kind of "hard-coded". I think you simply have to use this:
environment:
  - RETHINKDB_HOST=28015
