How to create different stages or environments in single serverless.yml file using serverless framework? - serverless

I am new to the Serverless Framework and I want to create three different environments: dev, QA, and prod. How do I create them in a single serverless.yml file? The provider is AWS.

You can pass a stage CLI option that can be interpolated wherever needed in your serverless.yml file. For example, take the following CLI command:
serverless --stage dev deploy
This can be accessed in serverless.yml with ${opt:stage}. I usually include this under provider:
provider:
  stage: ${opt:stage}
Then you can get the value of the stage option anywhere in serverless.yml using ${self:provider.stage}.
When the Lambda functions are deployed, their ARNs are automatically constructed from the service name (as defined by the service key in serverless.yml), the stage, and the function name, like the following:
arn:aws:lambda:us-east-1:010101010101:function:myservice-dev-mylambdaname
So you can simply run the deploy CLI command for the other two stages/environments you want, and you'll have a set of separate resources for each environment.
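For reference, here is a minimal sketch of what such a serverless.yml could look like; the default stage value, runtime, and DynamoDB table are illustrative assumptions, not part of the original answer:
# serverless.yml (sketch; default stage, runtime, and table name are assumptions)
service: myservice

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}        # falls back to 'dev' if --stage is omitted
  environment:
    STAGE: ${self:provider.stage}   # exposed to every function as an env var

functions:
  mylambdaname:
    handler: handler.main

resources:
  Resources:
    MyTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myservice-${self:provider.stage}-table   # per-stage resource name
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH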

Related

Passing arguments to Docker build while deploying AppEngine flex

I'm wondering if it's possible to feed arguments, or environment variables into a Dockerfile used by AppEngine (flex environment).
I'd like to use this command:
COPY ${STAGE}/keycloak-files/realm-config/* /opt/jboss/keycloak/realm-config/
"STAGE" variable would allow to select the origin (I have a "staging" and "production" directory, containing different configurations).
I've got two different app.yml files, one for each environment, but from what I read online, environment variables are not exposed to the Dockerfile at build time.
People suggest passing build arguments to accomplish this. But how would that be possible with App Engine, where we don't execute the docker build command directly?
As @DamPlz said, there is no straightforward way to pass environment variables from app.yaml to the Dockerfile during the deployment phase. Here are some workarounds that I could think of:
One option could be to declare the variable directly in the Dockerfile with a placeholder value and have a script substitute the real value before running “gcloud app deploy”.
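For instance, a hypothetical wrapper script for that first option (the __STAGE__ token, Dockerfile.template, and the per-stage app-*.yml file names are assumptions):
#!/usr/bin/env bash
# Sketch: substitute a placeholder in the Dockerfile before deploying.
# The __STAGE__ token and the file names are illustrative assumptions.
set -euo pipefail

STAGE="${1:-staging}"   # pass "staging" or "production" as the first argument

# Dockerfile.template contains the literal token __STAGE__ where the value belongs
sed "s/__STAGE__/${STAGE}/g" Dockerfile.template > Dockerfile

gcloud app deploy "app-${STAGE}.yml"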
On the other hand, you could use Cloud Build triggers to build the Docker image yourself and pass the value in using user-defined substitutions.
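A hypothetical cloudbuild.yaml along those lines (the substitution name, image name, and tag are assumptions):
# cloudbuild.yaml sketch: pass the user-defined substitution _STAGE
# to the Docker build as a build argument (names and values are assumptions).
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '--build-arg'
      - 'STAGE=${_STAGE}'
      - '-t'
      - 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA'
      - '.'
images:
  - 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA'
substitutions:
  _STAGE: 'staging'   # can be overridden per trigger or per build
Note that the Dockerfile would still need an ARG STAGE line before the COPY instruction for the build argument to be visible there.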

Variables in azure iot edge deployment template

In the Azure IoT Edge deployment template JSON, I see variables named MODULES.SampleModule or MODULES.SampleModule.json. An example is at this link. There is a JSON schema in the iotedge repository, but I am not able to find reference documentation for the variables that can be used in an Azure deployment template JSON.
From where is the variable MODULES.SampleModule populated?
Is there a reference documentation for azure deployment template variables?
In the deployment template, every variable must use the syntax ${var_name}.
When the deployment file is generated, each variable is replaced by the host environment variable of the same name.
For example, I have ${CONTAINER_REGISTRY_USERNAME} and ${CONTAINER_REGISTRY_PASSWORD} in my deployment.template.json. On my host machine I have CONTAINER_REGISTRY_USERNAME and CONTAINER_REGISTRY_PASSWORD set as environment variables. When the task runs, they are replaced.
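A trimmed sketch of where those placeholders typically sit in deployment.template.json (the registry name and address here are placeholder assumptions):
{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "runtime": {
          "type": "docker",
          "settings": {
            "registryCredentials": {
              "myregistry": {
                "address": "myregistry.azurecr.io",
                "username": "${CONTAINER_REGISTRY_USERNAME}",
                "password": "${CONTAINER_REGISTRY_PASSWORD}"
              }
            }
          }
        }
      }
    }
  }
}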
To make it work in a pipeline, you must make the environment variables available on the build machine. In Azure DevOps, you can create a variable group in the Library (https://learn.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&tabs=yaml).
In your pipeline, you can use this task to generate the manifest:
- task: AzureIoTEdge@2
  displayName: 'Generate deployment manifest - amd64'
  inputs:
    action: 'Generate deployment manifest'
    templateFilePath: 'EdgeModule/deployment.template.json'
    defaultPlatform: 'amd64'
    deploymentManifestOutputPath: '$(System.DefaultWorkingDirectory)/config/deployment-amd64.json'
    validateGeneratedDeploymentManifest: 'true'
    fillRegistryCredential: 'true'
I have a GitHub project with a default IoT Edge template that includes a full pipeline for unit tests, code coverage, building and pushing the image, generating the deployment manifest, and deploying to IoT Hub:
https://github.com/MaxThom/IoTEdgeModule-Template
Good luck :)
This project was built with the Visual Studio Code IoT Edge extension (https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools). It is the template that is used to generate the actual IoT Edge deployment file. When the extension creates the deployment file from this template it will substitute those variables with the required values. This is an implementation detail that is not exposed and thus undocumented.

Jenkins pipeline to change appsettings.json file and build according to the environment

I have a requirement to build a console application, but I need to change some values in the appsettings.json file according to the environment and then build it. I am new to Jenkins and want to know how to achieve this.
For dev, change values in the JSON file and build it -> for test, change the JSON values again and build it -> and so on up to prod.
This can be done in multiple ways, for example (the common idea between them is to check the incoming branch):
You might find better ways to do it but you can use this as a start.
Using bash, jq, and sponge through an sh step:
Create a JSON file as a template like the following (consider keeping this file in version control so it can be cloned on every build):
# settings.json
{
  "environment": "ENVIRONMENT_NAME",
  "appVersion": "APP_VERSION"
}
Check the branch name through an if condition and update the template according to the branch value (a pipeline sketch tying these steps together follows after this list):
jq '.environment = "branch_name"' settings.json | sponge settings.json
Use the customized settings.json in your application's code
Using the Config File Provider Plugin, which can be used inside the Jenkins pipeline as follows (also update the file based on the branch name):
configFileProvider([configFile(fileId: 'FILE_ID', targetLocation: 'FILE_LOCATION')]) {}
Check if the application framework can make use of environment variables.
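A minimal declarative Jenkinsfile sketch of the first approach; the branch-to-environment mapping, the settings.json path, and the dotnet build command are assumptions:
// Sketch only: branch names, environment mapping, and build command are assumptions.
pipeline {
    agent any
    stages {
        stage('Configure') {
            steps {
                script {
                    // Map the incoming branch to an environment name
                    def envName = 'dev'
                    if (env.BRANCH_NAME == 'main')         { envName = 'prod' }
                    else if (env.BRANCH_NAME == 'release') { envName = 'test' }
                    // Patch the template in place with jq + sponge
                    sh "jq '.environment = \"${envName}\"' settings.json | sponge settings.json"
                }
            }
        }
        stage('Build') {
            steps {
                sh 'dotnet build'   // assumed build command for the console application
            }
        }
    }
}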

Access OpenShift template parameter inside Dockerfile

Will OpenShift build parameters (with the OpenShift Docker build strategy) be automatically exposed to Dockerfiles as Docker build arguments (ARG) or environment variables (ENV), or does this need explicit configuration in a BuildConfig, e.g. in these places:
oc explain bc.spec.strategy.dockerStrategy.buildArgs
oc explain bc.spec.strategy.dockerStrategy.env
The reason I'm asking is that I have a template with several parameters, no explicit configuration yet, but an apparent situation where some parameters are accessible inside the Dockerfile (as $VAR) and others are not ($VAR is empty). I would like to understand normal behavior, before I debug my situation further.
UPDATE: I've now added a bc.spec.strategy.dockerStrategy.buildArgs section for the "missing" parameter to the template, like so:
strategy:
  type: Docker
  dockerStrategy:
    buildArgs:
      - name: VAR
        value: ${VAR}
but its value is still empty inside the built container, when I would have expected it to be true (because I started the build with oc new-app ... VAR=true). So something else must be wrong too.
This turned out to be a side effect of my perhaps particular way of employing OpenShift's Docker build strategy.
I maintain the Dockerfile in a separate file and use a script patch.sh to insert it into the uploaded template. This is convenient because a Dockerfile stored at bc.spec.source.dockerfile needs certain escaping (in its YAML representation), and the script takes care of this. If I must update the Dockerfile (which happens frequently during development), I just edit the file using verbatim Dockerfile syntax and then re-run the script.
The script also takes care of removing certain argument definitions from the Dockerfile (e.g. ARG VAR) and replacing references to them with references to corresponding OpenShift template parameters (e.g. $VAR with ${VAR}). The idea is for the script to turn a Dockerfile that would also be suitable for a standalone Docker environment into one that can serve OpenShift's Docker build strategy with template parameterization.
The actual error occurred because I had added a new template parameter but not yet adjusted the script accordingly. The situation is now back to normal.
UPDATE: I've now removed the special logic for manipulating arguments from my patch script and introduced build arguments under bc.spec.strategy.dockerStrategy.buildArgs instead. Entries look like this:
buildArgs:
  - name: VAR
    value: ${VAR}
So basically, the build configuration now does the copying (instead of my patch script doing the substitution).
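For completeness, a sketch of the Dockerfile side under the assumption that the argument is named VAR (the base image and usage are illustrative):
# Dockerfile sketch; base image and usage are illustrative assumptions.
FROM registry.access.redhat.com/ubi8/ubi-minimal

# With standard Docker semantics, a build argument must be declared with ARG
# before it can be referenced by later instructions.
ARG VAR

# Persist the value as an environment variable in the image, if needed.
ENV VAR=${VAR}

RUN echo "VAR is set to: ${VAR}"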

What is the best way to change application configurations in a CI environment

I am currently doing a POC on Jenkins pipeline to figure out how to configure my product in a CI environment. The requirements of the pipeline are:
Checkout code from SVN
Compile the program
Deploy to a predefined location on the server
Change DB configurations (& maybe even other configs not identified yet) to point to the appropriate DB
Execute the program
Execute QA process to validate the output
I am currently having difficulty achieving point 4 above. All DB-related configurations reside in a database.xml file per program, and a program can connect to one or more DBs.
Given that developers are free to check in any DB configuration, I would still like my CI environment to point to a predefined DB to test against. I am unsure how to dynamically change these configuration files to achieve this.
Please let me know if there are standard methods that others are also using to achieve the same.
TIA
Some approaches:
Properties using Advanced Platforms
Use a configuration platform such as:
zookeeper
http://www.therore.net/java/2015/05/03/distributed-configuration-with-zookeeper-curator-and-spring-cloud-config.html
Spring Cloud
https://www.baeldung.com/spring-cloud-configuration
This is Java Spring Framework functionality in which you create properties files with configurations and configure your applications to read them.
magi-properties-management
This is a Java web system in which you can create environments and any key:value pairs in each one. You just need to configure your application, in any language, to read these values.
cyber-properties-management
This is a Node.js application that allows you to store properties files (.properties, .yml, or .json) and then consume them as REST endpoints from your applications.
With these approaches, when a configuration change is required, you just update the value in the system and restart your application. A hot reload is even possible in Java applications.
Properties from Environment variables
You can export your key:value properties as environment variables before starting the application:
export DATABASE_HOST=10.100.200.300
export LOG_DIR_LOCATION=/logs
And read them after the application has started:
Java >> System.getenv("DATABASE_HOST");
node.js >> process.env.LOG_DIR_LOCATION
php >> getenv('DATABASE_HOST')
Properties from SCM
Create an SVN repository called development-configurations
Upload your database.xml with development values
In your application, keep a database.xml with dummy values: localhost, etc.
Create a Jenkins job that takes the environment as a parameter.
In the same job, check out your application's source code from SVN.
Check out the SVN repository called $environment-configurations, where $environment is your parameter.
Replace the database.xml inside your application with the database.xml from the $environment-configurations repository.
Just create additional repositories for testing, UAT, and production. The job must receive the environment as a parameter to choose the right database.xml.
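A hypothetical declarative pipeline sketch of this approach (repository URLs, directory layout, and file paths are assumptions):
// Sketch only: repository URLs and file locations are assumptions.
pipeline {
    agent any
    parameters {
        choice(name: 'ENVIRONMENT',
               choices: ['development', 'testing', 'uat', 'production'],
               description: 'Which configuration set to use')
    }
    stages {
        stage('Checkout') {
            steps {
                // Application source code
                sh 'svn checkout https://svn.example.com/repos/my-app app'
                // Environment-specific configurations
                sh "svn checkout https://svn.example.com/repos/${params.ENVIRONMENT}-configurations config"
            }
        }
        stage('Apply configuration') {
            steps {
                // Overwrite the dummy database.xml with the environment-specific one
                sh 'cp config/database.xml app/database.xml'
            }
        }
    }
}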
Properties from Database
Modify your applications to read configurations from a database instead of an XML file.
Properties from File System
Modify your application to read an external database.xml instead of the database.xml inside your source code. With this approach you just need to put the database.xml somewhere on your server and delete it from your application source code.
Note
You can use these approaches not only for backend apps, but also for frontend applications:
Devops Variable Substitution for Frontend js applications
