I am deploying a stream on Spring Cloud Data Flow version 2.1.2 on PCF. How do I pass "User Provided Environment Variables" to the deployed stream app?
It looks like all the properties I set are collapsed into SPRING_APPLICATION_JSON. I have a requirement to keep them separate.
Thoughts?
You can override this behavior with the Cloud Foundry deployer property useSpringApplicationJson, which passes the application properties as individual environment variables instead of merging them into SPRING_APPLICATION_JSON.
When deploying the stream, pass the deployer property deployer.*.cloudfoundry.useSpringApplicationJson=false to apply it to all the applications in the stream (using the wildcard *).
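As a concrete sketch, the deploy command from the SCDF shell would look like this (the stream name fooStream is a placeholder):

```shell
stream deploy fooStream --properties "deployer.*.cloudfoundry.useSpringApplicationJson=false"
```

With the property set to false, each application property arrives as its own environment variable rather than being folded into SPRING_APPLICATION_JSON.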
Related
I'm using SCDF and I was wondering if there is any way to configure default properties for one application.
I have a task application registered in SCDF, and this application takes some JDBC properties to access a business database:
app.foo.export.datasource.url=jdbc:db2://blablabla
app.foo.export.datasource.username=testuser
app.foo.export.datasource.password=**************
app.foo.export.datasource.driverClassName=com.ibm.db2.jcc.DB2Driver
Do I really need to put these properties in a property file like this? (It feels a bit weird to define them at launch.)
task launch fooTask --propertiesFile aaa.properties
Also, we cannot use the REST API, since the credentials would appear in the URL.
Is there another way/place to define default business properties for an application? These properties will only be used by this task.
The goal is to have one place where the OPS team can configure the URL and credentials without touching the launch command.
Thank you.
Yeah, SCDF feels a bit weird in the configuration area.
As you wrote, you can register an application and create tasks, but all the configuration is passed at the first launch of the task. Put another way, you can't fully install and configure a task without running it.
As soon as a task has run once, you can relaunch it without any configuration and it uses the configuration from before. The whole config is saved in the SCDF database.
However, if you try to overwrite an existing configuration property with a new value, SCDF seems to ignore the new value and continues to use the old one. I don't know whether this is by design, a bug, or something we are doing wrong.
Because we run SCDF tasks on Kubernetes and are used to configuring all infrastructure in YAML files, the best option we found was to write our own operator for SCDF.
This operator works against the REST interface of SCDF and also compensates for the configuration quirks mentioned above.
For example, the overwrite issue is solved by first deleting the configuration and then recreating it with the new values.
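As a rough sketch of that delete-and-recreate step (the server URL and task name are placeholders, and the exact endpoints should be checked against the SCDF REST API documentation), the operator effectively does something like:

```shell
# remove the existing task definition, then register it again
curl -X DELETE http://scdf-server:9393/tasks/definitions/fooTask
curl -X POST http://scdf-server:9393/tasks/definitions \
     -d "name=fooTask" -d "definition=fooApp"
```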
With this operator we have achieved what you are looking for: all our SCDF configuration is in a git repository and all changes are done through merge requests. Thanks to CI/CD, the new configuration is used on the next launch.
However, a Kubernetes operator should be part of the product. Without it, SCDF on Kubernetes feels quite "alien".
I am referring to this page:
https://www.instana.com/docs/setup_and_manage/host_agent/updates/#update-interval
Is there a way to pass the mode and time from outside, as environment variables or in any other way, besides logging into the pod and manually editing the etc/instana/com.instana.agent.main.config.UpdateManager.cfg file?
To whoever removed their answer: it was a correct answer, and I don't know why you deleted it. Anyhow, I am posting it again in case someone stumbles upon this.
You can control frequency and time by using INSTANA_AGENT_UPDATES_FREQUENCY and INSTANA_AGENT_UPDATES_TIME environment variables.
Whether the mode can be updated via an environment variable is still unknown at this point.
Look at this page for more info: https://www.instana.com/docs/setup_and_manage/host_agent/on/docker/#updates-and-version-pinning
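For example, when running the agent as a container, both variables can be set at start-up. The values below are illustrative and the command is trimmed to the relevant flags; check the linked docs for the accepted formats and the full set of required options:

```shell
docker run \
  -e INSTANA_AGENT_UPDATES_FREQUENCY=DAY \
  -e INSTANA_AGENT_UPDATES_TIME=04:30 \
  instana/agent
```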
Most agent settings that one may want to change quickly are available as environment variables; see https://www.instana.com/docs/setup_and_manage/host_agent/on/docker. For example, setting the mode via an environment variable is supported as well, with INSTANA_AGENT_MODE; see e.g. https://hub.docker.com/r/instana/agent. The valid values are:
APM: the default; the agent monitors everything
INFRASTRUCTURE: the agent collects metrics and entities but not traces
OFF: the agent runs but collects no telemetry
AWS: the agent collects data about AWS managed services in a region and an account; supported on EC2 and Fargate, and with some extra configuration, on hosts outside AWS
On Kubernetes, it is of course also possible to use a ConfigMap to override files in the agent container.
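A minimal sketch of that ConfigMap approach, mounted over the file mentioned in the question (the keys inside the .cfg content are illustrative placeholders; the real ones are described in the Instana update-interval docs):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: instana-agent-update-config
data:
  com.instana.agent.main.config.UpdateManager.cfg: |
    # illustrative keys; consult the Instana docs for the real ones
    mode = AUTO
    every = DAY:04:30
```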
I have a Spring Boot project that runs on 4 different environments. Each environment uses a different number of datasources, which are supplied as environment variables through the properties file. How can I organize all of this within my project?
From a Spring Boot application's perspective, a configuration property can be fed from different sources: environment variables, system properties, command-line arguments, and so on.
Spring Boot handles the different sources for you. Please look at this page for the details of externalized configuration.
So, if your application expects a configuration property "x.y.z", you may simply define an environment variable "X_Y_Z" (in each of your target environments), and its value will be automatically taken into account in your application.
For the exact syntax of the environment variables to define, you might want to look at the documentation of relaxed binding.
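As a small illustration of relaxed binding, using the hypothetical property x.y.z from above, both of the following feed the same configuration property:

```shell
# in application.properties:
#   x.y.z=value-from-file

# or, equivalently, as an environment variable in each target environment:
export X_Y_Z=value-from-env
```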
We are using a third-party security library which requires a system environment variable to be set. Can we pass it as a parameter in the stream definition, or while deploying the stream, separately for each app?
I didn't try this myself, but it looks like you can set the desired environment variable on the Spring Cloud Data Flow server and then also add it to spring.cloud.deployer.local.envVarsToInherit.
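A hedged sketch of what that could look like for the local deployer (MY_SECURITY_VAR is a placeholder for whatever variable your library expects):

```shell
# set the variable in the environment of the Data Flow server process
export MY_SECURITY_VAR=secret-value

# then, in the server's configuration, tell the local deployer
# to pass it on to the deployed apps:
#   spring.cloud.deployer.local.envVarsToInherit=MY_SECURITY_VAR
```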
I am using Spinnaker to deploy a 3-tier system to QA and then to Production. Configuration files in each of these systems point to the others. If I bake the QA configuration into the AMI, how do I change it when promoting to Prod? Is it 1) by having two different sets of AMIs, one for QA and one for Prod, or 2) by having AMIs with no configuration and then configuring them (somehow) after deployment?
What is recommended?
You can define custom AWS user data for a cluster at deploy time (under the advanced settings of the cluster configuration). You can then retrieve this user data in your application, which allows you to change these types of configuration.
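On the instance itself, the user data can then be read from the EC2 instance metadata service, for example:

```shell
# returns the raw user data supplied at deploy time
curl http://169.254.169.254/latest/user-data
```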
At Netflix, we have a series of init scripts that are baked into the base image and provide a mechanism for extending custom startup (init.d) scripts via nebula/gradle. These usually set values like NETFLIX_ENVIRONMENT that are well known and programmed against.
We also use a feature flipping mechanism via https://github.com/Netflix/archaius. This allows us to add properties that are external to the clusters but can be targeted towards them.
When it comes to secured credentials, the approach is outlined in this presentation, but essentially the images reach out to an external service that issues these kinds of credentials. https://speakerdeck.com/bdpayne/key-management-in-aws-how-netflix-secures-sensitive-data-without-its-own-data-center
I am struggling with similar problems myself in our company.
My solution was to create AMIs for specific purposes using a Packer script. This allows me to:
1. Configure the server as much as I can and then store that configuration in an AMI.
2. Easily change the configuration if the need arises.
Then I launch the AMI using an Ansible script and apply all the remaining configuration on the specific instance.
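A minimal sketch of such a Packer template (the region, source AMI, script name, and all other values are placeholders, not the poster's actual setup):

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-00000000",
    "instance_type": "t3.micro",
    "ssh_username": "ec2-user",
    "ami_name": "app-base-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "configure-server.sh"
  }]
}
```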
In my case I chose to create different images for staging and production, mostly because they differ greatly. If they were more alike, I might have chosen to use a single AMI for both.
The advantage Ansible gives you here is factoring out your configuration: it is written once and applied to both the production and staging servers.