Windows Registry Variables vs. Environment Variables?

At first glance this seems like a purely subjective/aesthetic issue, but I'd be interested to hear opinions (especially any technical ones) on whether environment variables or the registry is the preferred place for storing configuration data in a Windows environment.
I can currently only think of the following differences:
Registry settings are persistent across sessions, though I believe that environment variables can also have this property.
It's easier to set environment variables from the command-line vs. using regedit
(Counter-argument: regedit easier for non-command-line apps?)
Environment variables are more common across platforms (?).
I'm also aware that environment variables can be interrogated, modified and set from the registry.
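A minimal sketch of the "env var first, registry fallback" lookup (the setting name `MYAPP_MODE` and the `get_setting` helper are made up for illustration; the registry path `HKCU\Environment` is where per-user environment variables are stored):

```python
import os
import sys

def get_setting(name, default=None):
    """Look up a setting: environment variable first, then the per-user
    registry values under HKCU\\Environment (Windows only), then a default."""
    value = os.environ.get(name)
    if value is not None:
        return value
    if sys.platform == "win32":
        import winreg
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment") as key:
                value, _ = winreg.QueryValueEx(key, name)
                return value
        except OSError:
            pass  # value not present in the registry
    return default

os.environ["MYAPP_MODE"] = "debug"            # hypothetical setting name
print(get_setting("MYAPP_MODE", "release"))   # prints "debug"
print(get_setting("MYAPP_UNSET", "release"))  # prints "release" (not set anywhere)
```

Note the difference this illustrates: `os.environ` reflects the environment the process was started with, while a registry read always sees the current stored value.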

Use environment variables when you intend your application to be configured by other applications (or by a technical user), and when that configuration could differ between instances (e.g. two instances running at the same time with different settings). Cluttering a user's environment isn't usually necessary: in most cases, use the registry, or a config file stored in $HOME\AppData\Roaming\YourApp.

When using Windows services, environment variables can be a pain: just changing the variable and then restarting the service will not help. Usually the whole system needs to be restarted.
If the service looks up its settings in the registry, this is much easier.
I saw this behavior on Windows XP; I'm not sure whether later versions have resolved this issue.

Related

Do I have to rebuild my Docker image every time I want to make a change to app settings?

We distribute Docker images of our .Net Core Web API to clients.
By setting the ASPNETCORE_ENVIRONMENT environment variable to X in the client's Kubernetes Helm Chart, the correct environment settings in appsettings.X.json get picked up. This all works nicely.
But what happens if the client needs to change one of the settings in appsettings.X.json? We don't want them to rebuild the Docker image.
Can someone offer a better architecture here?
The most common practice is to get settings directly from the environment. Thus, instead of a settings.json, you would read from the environment (you could have defaults too). Another solution would be to use http://www.confd.io/.
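The "environment with defaults" layering can be sketched like this (a Python sketch with made-up setting names; ASP.NET Core's configuration system does the same layering natively when environment variable providers are registered after the JSON file providers):

```python
import os

# Hypothetical defaults, standing in for what appsettings.X.json would supply.
DEFAULTS = {
    "ConnectionString": "Server=localhost;Database=App",
    "LogLevel": "Information",
}

def load_settings(environ=os.environ):
    """Start from hard-coded defaults; any environment variable with a
    matching name overrides the default."""
    settings = dict(DEFAULTS)
    for key in settings:
        if key in environ:
            settings[key] = environ[key]
    return settings

# The client overrides a single setting without rebuilding anything:
cfg = load_settings({"LogLevel": "Debug"})
print(cfg["LogLevel"])          # prints "Debug"
print(cfg["ConnectionString"])  # default is kept
```

In Kubernetes, those overriding variables would come from the Helm chart's `env:` section, so a settings change is a redeploy of the chart, not of the image.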

How to set mode and time in Dynamic Agents?

I am referring to this page:
https://www.instana.com/docs/setup_and_manage/host_agent/updates/#update-interval
Is there a way to pass mode and time from outside, as environment variables or in any other way, besides logging into the pod and manually editing etc/instana/com.instana.agent.main.config.UpdateManager.cfg?
To whoever removed his/her answer: It was a correct answer. I don't know why you deleted it. Anyhow, I am posting again in case someone stumbles here.
You can control frequency and time by using INSTANA_AGENT_UPDATES_FREQUENCY and INSTANA_AGENT_UPDATES_TIME environment variables.
Updating mode via env variable is still unknown at this point.
Look at this page for more info: https://www.instana.com/docs/setup_and_manage/host_agent/on/docker/#updates-and-version-pinning
Most agent settings that one may want to change quickly are available as environment variables; see https://www.instana.com/docs/setup_and_manage/host_agent/on/docker. For example, setting the mode via environment variable is supported as well, with INSTANA_AGENT_MODE; see e.g. https://hub.docker.com/r/instana/agent. The valid values are:
APM: the default, the agent monitors everything
INFRASTRUCTURE: the agent will collect metrics and entities but not traces
OFF: agent runs but collects no telemetry
AWS: agent will collect data about AWS managed services in a region and an account, supported on EC2 and Fargate, and with some extra configurations, on hosts outside AWS
On Kubernetes, it is also of course possible to use a ConfigMap to override files in the agent container.
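Reading and validating the mode values listed above could look like this (a sketch; only `INSTANA_AGENT_MODE` and its values come from the docs above, the helper function is made up):

```python
import os

# Valid values per the Instana agent documentation.
VALID_MODES = {"APM", "INFRASTRUCTURE", "OFF", "AWS"}

def agent_mode(environ=os.environ):
    """Read INSTANA_AGENT_MODE, defaulting to APM, and reject typos early."""
    mode = environ.get("INSTANA_AGENT_MODE", "APM")
    if mode not in VALID_MODES:
        raise ValueError(f"invalid INSTANA_AGENT_MODE: {mode!r}")
    return mode

print(agent_mode({}))                                   # prints "APM"
print(agent_mode({"INSTANA_AGENT_MODE": "INFRASTRUCTURE"}))
```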

How do I configure two sets of hosts (3 for QA and 3 for Prod) for deploying a distributed system using Spinnaker?

I am using Spinnaker to deploy a 3-tier system to QA and then to Production. Configuration files in each of these systems point to the others. If I bake the configuration for QA into the AMI, then how do I change it while promoting to Prod? Is it 1) by having two different sets of AMIs, one for QA and one for Prod, or 2) by having AMIs with no configuration and then (somehow) changing the configuration files after deployment?
What is recommended?
You can define custom AWS user data for a cluster at deploy time (under the advanced settings of the cluster configuration). You can then retrieve this user data in your application, which allows you to change these types of configuration.
At Netflix, we have a series of init scripts that are baked into the base image and provide a mechanism for extending custom startup (init.d) scripts via nebula/gradle. This usually sets values like NETFLIX_ENVIRONMENT that are well known and programmed against.
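On a running instance, user data is available from the metadata service (a GET against http://169.254.169.254/latest/user-data). Parsing simple KEY=VALUE lines out of it could look like this (a sketch; the keys shown are hypothetical):

```python
def parse_user_data(text):
    """Parse simple KEY=VALUE lines from EC2 user data, ignoring
    blank lines and #-comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

user_data = """
# injected at deploy time (hypothetical keys)
NETFLIX_ENVIRONMENT=prod
CONFIG_HOST=config.prod.example.com
"""
print(parse_user_data(user_data)["NETFLIX_ENVIRONMENT"])  # prints "prod"
```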
We also use a feature flipping mechanism via https://github.com/Netflix/archaius . This allows us to add properties that are external to the clusters but can be targeted towards them.
When it comes to secured credentials, the approach is outlined in this presentation, but essentially the images reach out to an external service that issues this type of credential: https://speakerdeck.com/bdpayne/key-management-in-aws-how-netflix-secures-sensitive-data-without-its-own-data-center
I am struggling with similar problems myself in our company.
My solution was to create AMIs for specific purposes using a Packer script. This allows me to:
1. Configure the server as much as I can, and then store those configurations in an AMI.
2. Easily change these configurations if the need arises.
Then I launch the AMI using an Ansible script and apply the rest of the configuration on the specific instance.
In my case I chose to create different images for staging and production, but mostly because they differ greatly. If they were more alike, I might have chosen to use a single AMI for both.
The advantage Ansible gives you here is that it factors out your configuration: it is written once and applied to both production and staging servers.

Configuration of dockerized applications

How do you deal with host-specific configuration for docker containers?
Most importantly, production passwords: they can't be put into the container for security reasons, and they need to be changed on a regular basis without redeploying anything.
I mostly use Volume containers if I really want to avoid making an image for that: http://docs.docker.com/userguide/dockervolumes/
however, I have done things like
FROM myapp
COPY password.file /etc/
# ...followed by whatever configuration steps the app needs, e.g.:
# RUN /usr/local/bin/apply-config.sh
to make specific images with password baked in.
That way the specific configuration, like passwords, is not in the general images, which are reused and known, but in an image that is built for, and on, the host that runs it. (This approach is not really very different from using Puppet/Chef etc. to customise the files on a VM/server; the configuration eventually needs to hit the filesystem in a safe but repeatable way.)
As one comment says, use environment variables, which you can pass via -e pass=abcdef. You can also save them in a fig.yml config file to avoid having to type them every time.
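A fig.yml carrying such variables might look like this (a sketch; in fig/Compose files, listing a bare variable name passes its value through from the shell environment at `fig up` time, so the secret itself never appears in the file — the `web` service name and `APP_MODE` are made up):

```yaml
web:
  image: myapp
  environment:
    - pass                  # value taken from the host shell's environment
    - APP_MODE=production   # a literal, non-secret value
```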

Windows Service running in multiple instances - best practice?

I have a project where a number of 'environments' are running simultaneously: the local development environment (VS), Dev, Test and Prod.
We now wish to expand the program suite with a 'server application' to process background assignments as calculations and mail sending.
I'm trying to find best practice for this situation.
I'm thinking that it should be a windows service.
As a result, I need to have three copies of the service running (Dev, Test and Prod), preferably on the single server assigned as our application server. I'm thinking I can copy the relevant exe to separate directories and 'somehow' instruct each service which environment it is supposed to connect to.
It's important to note that the three services would not necessarily be running the same release of the code.
What is the best practice for doing this?
Any input appreciated!
Anders, Denmark
Definitely sounds like Windows Services would be the right call. These services would be daemons, running independently from each other.
I recommend against creating 3 executables. Stick with just one as it is easier for deployment.
Have your exe take a command-line parameter telling it which environment it should run against, and fire off the appropriate part of the code.
It's then pretty easy to start, stop and query your services.
Let me know your thoughts!
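That command-line switch can be very small, along these lines (a sketch; the environment names come from the question, but the settings table and connection strings are made up). Each installed copy of the service is then registered with its own argument, e.g. something like `sc create MyServiceDev binPath= "C:\Services\Dev\MyService.exe --environment Dev"`:

```python
import argparse

# Hypothetical per-environment settings.
SETTINGS = {
    "Dev":  {"db": "Server=devdb;Database=App"},
    "Test": {"db": "Server=testdb;Database=App"},
    "Prod": {"db": "Server=proddb;Database=App"},
}

def settings_for(argv):
    """Pick the settings block for the environment named on the command line."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--environment", choices=sorted(SETTINGS), required=True)
    args = parser.parse_args(argv)
    return SETTINGS[args.environment]

print(settings_for(["--environment", "Test"])["db"])  # prints "Server=testdb;Database=App"
```

Because the environment is chosen at startup, the three installed services can also run different releases of the same executable, each from its own directory.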
Use an application config file for each installed instance of the service executable so you can set the environment to run against.
