Set up admin password from environment variable - influxdb

I have deployed influxdb2 as a statefulset in my k8s cluster.
I have set the environment variables as follows:
DOCKER_INFLUXDB_INIT_MODE=setup
DOCKER_INFLUXDB_INIT_USERNAME=admin
DOCKER_INFLUXDB_INIT_PASSWORD=Adm1nPa$$w0rd
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=Adm1nT0k3n
The first time I ran my manifest, it worked just fine and I could log in to the GUI using the provided secrets.
Now I want to rotate those secrets, so I change those variables, redeploy my StatefulSet, and find this in the logs:
2022-06-15T11:35:46. info found existing boltdb file, skipping setup wrapper {"system": "docker", "bolt_path": "/var/lib/influxdb2/influxd.bolt"}
Indeed, if I log into my pod I can browse /var/lib/influxdb2/influxd.bolt and find the previous admin secret values: Adm1nT0k3n and Adm1nPa$$w0rd.
How can I force influxdb2 to use the new environment variables DOCKER_INFLUXDB_INIT_PASSWORD and DOCKER_INFLUXDB_INIT_ADMIN_TOKEN?
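For reference, here is a minimal sketch of how these variables might be wired into the StatefulSet container spec; the container name and image tag are illustrative and not taken from the original manifest:

# Sketch only: container name and image tag are placeholders
containers:
  - name: influxdb
    image: influxdb:2.2
    env:
      - name: DOCKER_INFLUXDB_INIT_MODE
        value: "setup"
      - name: DOCKER_INFLUXDB_INIT_USERNAME
        value: "admin"
      - name: DOCKER_INFLUXDB_INIT_PASSWORD
        value: "Adm1nPa$$w0rd"
      - name: DOCKER_INFLUXDB_INIT_ADMIN_TOKEN
        value: "Adm1nT0k3n"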

Using GitHub Codespaces secrets in devcontainer.json

Problem
Some library I use requires the case-sensitive environment variable QXToken.
When I create a Codespaces secret, the environment variable is only available in uppercase (QXTOKEN), as the secret names are case-insensitive. Therefore I want to copy the secret stored in QXTOKEN to the environment variable QXToken.
I tried to do that in the devcontainer.json:
{
  ...
  "remoteEnv": {
    "QXAuthURL": "https://auth.quantum-computing.ibm.com/api",
    "QXToken": "${secrets.QXTOKEN}"
  },
  "updateContentCommand": "env; export QXToken=$QXTOKEN; env",
  "postCreateCommand": "env; export QXToken=$QXTOKEN; env",
  "postStartCommand": "env; export QXToken=$QXTOKEN; env",
  "postAttachCommand": "env; export QXToken=$QXTOKEN; env"
}
But remoteEnv cannot access the Codespaces secrets via ${secrets.QXTOKEN} as one can in GitHub Actions, and none of updateContentCommand, postCreateCommand, postStartCommand, and postAttachCommand saved the environment variable persistently for the user.
Using the command env, I can see from the logs that the environment variables have been set, but by the next command they are already gone.
Even though postCreateCommand is able to access the Codespaces secrets according to the documentation, I was not able to set environment variables for later use.
For now I only see the following environment variables, but I am missing QXToken:
$ env | grep QX
QXAuthURL=https://auth.quantum-computing.ibm.com/api
QXTOKEN=***
Question
Is there a best practice to reuse codespaces secrets inside devcontainer.json and make them available as environment variables in the codespace?
GitHub Codespaces secrets are available via localEnv, a special variable in devcontainer.json that provides access to environment variables on the host machine. Therefore, you can set the environment variable QXToken with ${localEnv:QXTOKEN} inside devcontainer.json.
Furthermore, if you want to set an environment variable pointing to a path inside your repo, you can use ${containerWorkspaceFolder}/path/inside/your/repo.
"remoteEnv": {
// Use a GitHub Codespaces secret:
"QXToken": "${localEnv:QXTOKEN}",
// Point to a path inside your repo:
"QISKIT_SETTINGS": "${containerWorkspaceFolder}/.qiskit/settings.conf"
}
For more details on the available variables in devcontainer.json, have a look at the documentation.
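As a quick sanity check, after rebuilding the container you can confirm that the variable is present from a terminal in the codespace:

# Case-sensitive match; should list QXToken with the injected value
env | grep QXToken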

Jenkins X use secrets in Preview environments

I'm using Jenkins X for microservice build/deployment. In each environment there are shared secrets used across microservices (client keys etc.) which are injected into deployment.yaml as environment variables using valueFrom and secretKeyRef. This works well in Production and Staging where the namespaces are well known, but since Preview generates a new namespace each time, these secrets will not exist there. Is there a way to copy secrets from another, known namespace, or is there a better approach?
You can create another namespace called jx-preview to store preview-specific secrets, and add this line after the jx preview command in your Jenkinsfile:
sh "kubectl get secret {secret_name} --namespace={from_namespace} --export -o yaml | kubectl apply --namespace=jx-$ORG-$PREVIEW_NAMESPACE -f -"
Not sure if this is the best way though
We've got a command to link services from one namespace to another, for example to link services from staging to your preview environment via jx step link services.
It would be nice to add a similar command to copy secrets from a namespace in the same way. I've raised an issue to track this new feature.
Another option is to create your own Job in charts/preview/templates/myjob.yaml, have that Job create whatever Secrets you need however you want, and then annotate it so that it is triggered as a post-install hook of your Preview chart.
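As a rough sketch of that approach (the secret name, key, Helm value, and kubectl image are all placeholders, and the Job's service account needs the appropriate RBAC permissions):

# charts/preview/templates/myjob.yaml (hypothetical sketch)
apiVersion: batch/v1
kind: Job
metadata:
  name: create-preview-secrets
  annotations:
    # Run this Job after the preview chart is installed
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-secrets
          image: bitnami/kubectl:latest
          env:
            # The preview namespace this chart is being installed into
            - name: TARGET_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          command:
            - /bin/sh
            - -c
            - >
              kubectl create secret generic shared-client-keys
              --namespace=$(TARGET_NAMESPACE)
              --from-literal=client-key='{{ .Values.preview.clientKey }}'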

Nextcloud trusted domain with auto configuration via environment variables

When I configure Nextcloud (which runs in a Docker container) using environment variables, I can't visit the site afterwards, and I need to configure it manually by connecting to the container with bash.
How can I solve this problem or make it automatic without creating my own Docker image?
The environment variable only gets picked up and applied to the config when building a brand new instance. If you've already created a config.php file which is mapped in that volume, that environment variable will not override it.
If you want to keep your existing config intact, you need to SSH into your NAS and go to your Nextcloud Docker folder and find /config/config.php. For me this was located at: /docker/nextcloud/config/www/nextcloud/config
Then type: sudo nano config.php
Quick vi refresher (if you use vi instead of nano): i to insert, Esc to leave insert mode, and :wq to write and quit; in this instance you may need to use :wq!
To add a new domain, just append a new entry to the trusted_domains array:
'trusted_domains' =>
  array (
    0 => '192.168.0.29',
    1 => 'cloud.example.com',
  ),
Reference: https://help.nextcloud.com/t/howto-add-a-new-trusted-domain/26
That sounds like an issue with Trusted Domains.
If you have a look at their repository readme at https://github.com/nextcloud/docker you will see an environment variable called NEXTCLOUD_TRUSTED_DOMAINS which you can set in your Docker environment.
Alternatively, you will find it in {app}/config/config.php.
The default value set for it, in my experience, is only 'localhost', which enables connecting to Nextcloud from localhost at the very least.
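For example, a minimal sketch of starting the official image with trusted domains preset (hostnames and port mapping are illustrative; NEXTCLOUD_TRUSTED_DOMAINS takes a space-separated list):

docker run -d \
  -p 8080:80 \
  -e NEXTCLOUD_TRUSTED_DOMAINS="cloud.example.com 192.168.0.29" \
  -v nextcloud:/var/www/html \
  nextcloud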
Hope this helps.

How to debug an Elixir application in production?

This is not specifically about my current problem, but a general question. Sometimes I have a problem that only happens in the production configuration, and I'd like to debug it there. What is the best way to approach that in Elixir? Production runs without a graphical environment (Docker).
In dev I can use IEx.pry, but since Mix is unavailable in production, that does not seem to be an option.
For Erlang, https://stackoverflow.com/a/21413344/1561489 mentions dbg and redbug, but even if they can be used, I would need help applying them to Elixir code.
First, start a local node running iex on your dev machine using iex -S mix. If you don't want the application that's running locally to cause breakpoints to be activated, you need to disable the app from starting locally. To do this, you can simply comment out the application function in mix.exs or run iex -S mix run --no-start.
Next, you need to connect to the remote node running on docker from iex on your dev node using Node.connect(:"remote#hostname"). In order to do this, you have to make sure both the epmd and the node ports on the remote machine are reachable from your local node.
Finally, once your nodes are connected, from the local iex, run :debugger.start() which opens the debugger with the GUI. Now in the local iex, run :int.ni(<Module you want to debug>) and it will make the module visible to the debugger and you can go ahead and add breakpoints and start debugging.
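Putting these steps together, a rough sketch of the local iex session might look like this; the node name, cookie, module, and line number are placeholders:

# On your dev machine, started with: iex --name local@127.0.0.1 -S mix run --no-start
Node.set_cookie(:"my_cookie")
true = Node.connect(:"remote@hostname")

:debugger.start()                  # opens the graphical debugger locally
:int.ni(MyApp.SomeModule)          # interpret the module on all connected nodes
:int.break(MyApp.SomeModule, 42)   # optional: set a breakpoint at line 42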
You can find a tutorial with steps and screenshots here.
If you are running your production system on AWS, then you should first and foremost leverage CloudWatch to your advantage.
In your elixir code, configure your logger like this:
config :logger,
  handle_otp_reports: true,
  handle_sasl_reports: true,
  metadata: [:application, :module, :function, :file, :line]

config :logger,
  backends: [
    {LoggerFileBackend, :shared_error}
  ]

config :logger, :shared_error,
  path: "#{logging_dir}/verbose-error.log",
  level: :error
Inside your Dockerfile, configure an environment variable for where exactly erl_crash.dump gets written to, such as:
ENV ERL_CRASH_DUMP=/opt/log/erl_crash.dump
Then configure awslogs inside a .config file under .ebextensions as follows:
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [erl_crash.dump]
      log_group_name=/aws/elasticbeanstalk/your_app/erl_crash.dump
      log_stream_name={instance_id}
      file=/var/log/erl_crash.dump
      [verbose-error.log]
      log_group_name=/aws/elasticbeanstalk/your_app/verbose-error.log
      log_stream_name={instance_id}
      file=/var/log/verbose-error.log
And ensure that you set a volume for your Docker container in Dockerrun.aws.json:
"Logging": "/var/log",
"Volumes": [
{
"HostDirectory": "/var/log",
"ContainerDirectory": "/opt/log"
}
],
After that, you can inspect your error messages under CloudWatch.
Now, if you are using Elastic Beanstalk (which my example above implicitly assumes) with Docker deployment as opposed to AWS ECS, then the container's stdout/stderr logs are redirected by default to /var/log/eb-docker/containers/eb-current-app/stdouterr.log and picked up by CloudWatch.
The main purpose of shipping erl_crash.dump is to at least know when your application crashed and took the container down. AWS EB will normally restart the container, which can leave you unaware that a restart happened. The same understanding can also be obtained from other Docker-related logs, and you can configure alarms to listen for them and be notified when your container had to restart. But another advantage of logging erl_crash.dump to CloudWatch is that, if need be, you can always export it later to S3, download the file, and load it in the Crashdump Viewer (part of the :observer application) to analyse what went wrong.
If, after consulting the logs, you still require more direct interaction with your production application, then you need to remsh into your node. If you use Distillery, you would configure the cookie and the node name of your production application in your release like this:
Inside rel/config.exs, set the cookie:
environment :prod do
  set include_erts: false
  set include_src: false
  set cookie: :"my_cookie"
end
and under rel/templates/vm.args.eex you set variables:
-name <%= node_name %>
-setcookie <%= release.profile.cookie %>
and inside rel/config.exs, you define the release like this:
release :my_app do
  set version: "0.1.0"
  set overlays: [
    {:template, "rel/templates/vm.args.eex", "releases/<%= release_version %>/vm.args"}
  ]
  set overlay_vars: [
    node_name: "p@127.0.0.1"
  ]
end
Then you can connect directly to your production node running inside Docker by first SSH-ing into the EC2 instance that houses the Docker container and running the following:
CONTAINER_ID=$(sudo docker ps --format '{{.ID}}')
sudo docker exec -it $CONTAINER_ID bash -c "iex --name q@127.0.0.1 --cookie my_cookie"
Once inside, you can poke around or, if need be and at your own peril, dynamically inject modified code for the module you would like to inspect. An easy way to do that would be to create a file inside the container and invoke something like Node.spawn_link(target_node, fn -> Code.eval_file(file_name, path) end).
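Spelled out as a sketch from the iex session above, assuming the release node is named p@127.0.0.1 and a patch file has been copied to /tmp inside the container (both placeholders):

target = :"p@127.0.0.1"

Node.spawn_link(target, fn ->
  # Evaluates /tmp/patch.exs on the release node, redefining the module there
  Code.eval_file("patch.exs", "/tmp")
end)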
If your production node is already running and you do not know the cookie, you can go inside your running container, run ps aux > t.log, and cat t.log to figure out what cookie has been applied and use it accordingly.
Docker serves as an impediment to the way epmd communicates with other nodes. The best option therefore would be to create your own AWS AMI image using Packer and do bare-metal deployments instead.
Amazon has recently released a new feature for AWS ECS, AWS VPC networking mode, which may facilitate inter-container epmd communication and thus connecting to your node directly. I have not tried it out yet, so I may be wrong.
In the case that you are running on a provider other than AWS, figuring out how to get easy access to your remote logs with some SSM-like agent or some other service is a must.
I would also recommend using some sort of exception-handling tool; so far I have had great experiences with Sentry.

Accessing Elastic Beanstalk environment properties in Docker

So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regards to how the environment properties are handled, is that correct? According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering the containers that do support custom environment properties say it explicitly, like Python shown here [2]. Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode, then all my environment specific configurations are set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container; this looks like a gap in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the Docker container.
I needed to pass an environment variable at docker run time using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with Dockerrun.aws.json and the Dockerfile, and upload it to Elastic Beanstalk.
To see the result, inside the EC2 instance, execute the command "docker inspect CONTAINER_ID" and you will see the environment variable.
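To narrow the output to just the environment block, a filter along these lines should work (the container ID is a placeholder):

sudo docker inspect --format '{{ .Config.Env }}' CONTAINER_ID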
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current, which is where your code should be within the EB instance.
Use a package like python-dotenv to load the .env file (or something similar if you aren't using Python). Note that this solution should be generic to any language/framework that you're using within your container.
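For instance, a minimal sketch of loading the file from Python inside the container, assuming python-dotenv is installed and STAGE is one of the environment properties you set in the EB console (both are assumptions, not from the original answer):

import os
from dotenv import load_dotenv

# Load the variables written by the .ebextensions command above
load_dotenv("/var/app/current/.env")

stage = os.getenv("STAGE", "production")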
I don't think the docs are a miss as Rohit Banga's answer suggests, though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs says, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open-ended. You specify whatever variables you want ... they make no assumptions as to what you are running, Rails (Ruby), Django (Python), etc. ... because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.
