Docker Swarm Secrets + Portainer

I run Node-RED on Docker Swarm + Portainer.
I want to define several sets of credentials, e.g. some for development, some for test and some for production, so that I can create three containers, one for each environment, and use the relevant credentials in each of them.
I'm a bit confused about the right way to do this. From what I understand, I can use Docker Swarm Secrets, but then I don't know how to access them from the Node-RED editor; or I can use the "credentialSecret" setting in settings.js, but I'm not sure whether it's suitable for multiple credentials.
Can someone assist?
Thanks in advance!

Assuming that the credentials are passed into the containers as environment variables, they can be used in a node's configuration as follows:
Any node property can be set with an environment variable by setting
its value to a string of the form ${ENV_VAR}. When the runtime loads
the flows, it will substitute the value of that environment variable
before passing it to the node.
This only works if it replaces the entire property - it cannot be used
to substitute just part of the value. For example, it is not possible
to use CLIENT-${HOST}.
E.g. if you have an environment variable called MQTT_PASSWORD that holds the password to use when connecting to your MQTT broker, you would enter ${MQTT_PASSWORD} into the password field of the MQTT broker config node.
This will be populated when the Node-RED runtime loads the flow.
You can read more in the Node-RED documentation.
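As a concrete sketch (the image, port, stack names and password values here are illustrative, and with Swarm secrets the value would come from a file under /run/secrets rather than being written into the stack file), one stack file per environment can supply the variable that ${MQTT_PASSWORD} resolves to:
# stack-dev.yml - create stack-test.yml and stack-prod.yml analogously
version: "3.8"
services:
  node-red:
    image: nodered/node-red
    ports:
      - "1880:1880"
    environment:
      # the value substituted for ${MQTT_PASSWORD} when the flow loads
      - MQTT_PASSWORD=dev-password
# deploy one stack per environment:
#   docker stack deploy -c stack-dev.yml nodered-dev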

Related

In a container built with Quarkus, trying to optionally enable OIDC integration with Keycloak on docker container start

I would like to provide our container with an optional OIDC/Keycloak integration, disabled by default but possible to enable via env variables when starting a container.
This is how the configuration looks in application.properties at build time:
quarkus.oidc.enabled=false
# quarkus.oidc.auth-server-url=<auth-server-url>
# quarkus.oidc.client-id=<client-id>
# quarkus.oidc.credentials.secret=<secret>
Ideally, on container start, quarkus.oidc.enabled=true could be set alongside the other three properties via container env variables.
However, Quarkus won't allow this, as quarkus.oidc.enabled can apparently only be set at build time, not overridden at runtime (https://quarkus.io/guides/security-openid-connect#configuring-the-application).
I have found a Google Groups thread that picks up on this topic (https://groups.google.com/g/quarkus-dev/c/isGqZvY829g/m/BNerQvSRAQAJ), mentioning the use of quarkus.oidc.tenant-enabled=false instead, but I am not sure how to apply this strategy in my use case.
Can anyone help me out here on how to make this work without having to build two images (one with OIDC enabled, and one without)?
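One way to apply the tenant-enabled workaround from that thread is sketched below (untested; the Keycloak URL, realm, client id, secret and image name are placeholders). The idea is to build with OIDC compiled in but the default tenant disabled, then switch the tenant on at container start through Quarkus' standard environment-variable mapping of config properties:
# application.properties at build time
quarkus.oidc.enabled=true
quarkus.oidc.tenant-enabled=false

# at container start, enable the tenant and supply the connection details
# (quarkus.oidc.tenant-enabled maps to QUARKUS_OIDC_TENANT_ENABLED, etc.)
docker run \
  -e QUARKUS_OIDC_TENANT_ENABLED=true \
  -e QUARKUS_OIDC_AUTH_SERVER_URL=https://keycloak.example.com/realms/my-realm \
  -e QUARKUS_OIDC_CLIENT_ID=my-client \
  -e QUARKUS_OIDC_CREDENTIALS_SECRET=my-secret \
  my-image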

Getting AWS parameter store secrets inside ecs-cli compose docker container

I have been building a backend for the last few days that I am launching with docker-compose. I use docker secrets to not have to store passwords - for example for the database - in an environment variable.
Since I want to use AWS ECS to run the docker containers online, and unfortunately docker-compose is not supported the way I want, I'm trying to rewrite the whole thing into an ecs-cli compose file. However, I am still stuck on the secrets. I would like to include them like this:
version: 1
task_definition:
  ...
  services:
    my-service:
      ...
      secrets:
        - value_from: DB_USERNAME
          name: DB_USERNAME
        - value_from: DB_PASSWORD
          name: DB_PASSWORD
By doing this, the secrets get saved inside environment variables, don't they? This is not best practice, or is this case different from other cases?
Can I access these variables without problems inside the container by getting the environment variables?
I hope I have made my question clear enough, if not, please just ask again.
Thanks for the help in advance.
It is not best practice to store sensitive information in environment variables. There is an option in AWS ECS to configure environment variables whose values are fetched from AWS Secrets Manager. That way, the environment variables are only resolved inside the container at run time.
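In the ecs-cli compose format used above, that looks roughly like this (the region, account id and secret name are illustrative): value_from takes the ARN of a Secrets Manager secret (or an SSM parameter) instead of a literal value:
secrets:
  - value_from: arn:aws:secretsmanager:eu-central-1:123456789012:secret:prod/db_password
    name: DB_PASSWORD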
But this still means that the container is going to store the variables as environment variables.
I have faced a similar situation while deploying apps onto EKS. I set up a central Vault server for secrets management within AWS and configured my application to call the Vault endpoint directly to get the secrets. I had to complicate my architecture because I had to meet PCI compliance standards. If you are not keen on using Vault due to its complexity, you can try knox-app (https://knox-app.com/), an online secrets management tool built by Lyft engineers.
And to answer the second part of your question: yep. If you set the env variables, you will be able to access them within the container without any problem.

LXC environment variables

I'm new to LXC containers and am using LXC v2.0. I want to pass settings to the processes running inside my container (specifically, command line parameters for their systemd service files).
I'm thinking of passing environment variables to the container via the config file: lxc.environment = ABC=DEF (I intend to use SaltStack to manipulate these variables). Do I manually have to parse /proc/1/environ to access these variables, or is there a better way I'm missing?
The documentation says:
If you want to pass environment variables into the container (that is, environment variables which will be available to init and all of its descendents), you can use lxc.environment parameters to do so.
I would assume that, since all processes - including the shell - are descendants of the init process, the environment should be available in every shell. Unfortunately, this turns out not to be true. In a discussion on linuxcontainers.org, someone states:
That’s not how this works unfortunately. Those environment variables are passed to anything you lxc exec and is passed to the container’s init system.
Unfortunately init systems usually don’t care much for those environment variables and never propagate them to their children, meaning that they’re effectively just present in lxc exec sessions or to scripts which directly look at PID 1’s environment.
So yes, obviously parsing /proc/1/environ seems to be the only possibility here.
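A minimal sketch of that parsing (using the ABC variable from the example above): entries in /proc/1/environ are NUL-separated, so translate the separators before filtering:
# list init's environment, one entry per line
tr '\0' '\n' < /proc/1/environ
# or extract a single variable, e.g. ABC
tr '\0' '\n' < /proc/1/environ | grep '^ABC=' | cut -d= -f2-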

Put an application's public URL in its Docker Compose environment

I have a Python API that has to know its public address to properly create links to itself (needed when doing paging and other HATEOAS stuff) in the responses it creates. The address is given to the application as an environment variable.
In production it's handled by Terraform, but I also have extensive local tests that make use of Docker Compose. In tests for paging I need to be aware of the fact that I'm running locally and I need to replace the placeholder address I'm putting in the app's env with http://localhost:<apps_bound_port> for following the links.
I don't want to do that. I'd like to have a way to put the port assigned by Docker into the app's environment variables. The problem wouldn't exist if I were using fixed ports (then I could just put something like http://localhost:8000 in the public address variable), but I can have multiple instances of Compose running, so fixed ports wouldn't work.
I know I can pass environment variables from the shell running docker-compose to the containers, but I don't know of a way to insert the generated port using this approach.
The only solution I have for my problem right now is to find a free port before Compose runs, and then pass it in as an environment variable (API_PORT=<FREE_PORT> docker-compose up), while setting up the port mapping like this in docker-compose.yml:
ports:
  - "${API_PORT}:8000"
This isn't ideal, because I run Compose both from the shell (with make) and from Python tests, so I'd need to put the logic for getting the port into an env variable in both places.
Is there something I'm missing, or should I create a feature request for Docker Compose?
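One thing that may be missing from the workaround above (a sketch; PUBLIC_URL and my-api are assumed names): the same ${API_PORT} that feeds the port mapping can also be interpolated into the service's environment in docker-compose.yml, so the container itself learns its public port and only the port-picking logic stays outside:
# docker-compose.yml (fragment)
services:
  my-api:
    ports:
      - "${API_PORT}:8000"
    environment:
      # the app reads its public address from here
      - PUBLIC_URL=http://localhost:${API_PORT}
# pick a free port and start Compose with it:
API_PORT=$(python3 -c 'import socket; s=socket.socket(); s.bind(("",0)); print(s.getsockname()[1])') docker-compose up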

Why should I use Docker Secrets?

For the last few months I've managed passwords for my docker containers by putting them in environment variables.
Example:
web:
  environment:
    - PASSWORD=123456
Then I bumped into Docker Secrets. So my questions are:
Which are the reasons why I should use them?
Are they more secure? How?
Can you provide a simple example to show their functionalities?
It depends on the use case.
If you're running one application on your own machine for development that accesses just one secret, you don't need docker secrets.
If you're running dozens of machines in production with a dozen clustered services all requiring secrets for each other, you do need secret management.
Apart from security concern, it's just plain easier to have a standardized way of accessing, creating and removing your secrets.
A basic docker inspect (among other things) will show all your environment variables, so this is not secure at all.
You can also have a look at keywhiz (square.github.io/keywhiz) or vault (hashicorp.com/blog/vault.html).
From: https://github.com/moby/moby/issues/13490
Environment variables are discouraged, because they are:
Accessible by any process in the container, thus easily "leaked"
Preserved in intermediate layers of an image, and visible in docker inspect
Shared with any container linked to the container
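To the third question, a minimal end-to-end sketch (Swarm mode required; the names and password value are illustrative). The secret surfaces inside the container as a file under /run/secrets rather than as an environment variable, which is exactly what avoids the three leaks listed above:
# create the secret (the value is read from stdin)
printf '123456' | docker secret create db_password -
# attach it to a service; it appears in the container as /run/secrets/db_password
docker service create --name web --secret db_password nginx
# inside the container the application reads the file, e.g.:
#   cat /run/secrets/db_password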
