I am starting to look at CircleCI to build my projects. At the moment we are using Octopus Deploy, but we want to use something new.
Today we have an appsettings file, e.g. "Appsettings.json".
It has a structure like this:
"ConnectionStrings": {
"DatabaseConnectionString": "MyLocalConnectionString",
"MessageBusConnectionString": "MyLocalConnectionString2"
},
"MessageBus": {
"Sqs": {
"DefaultQueu" : "LocalTestQueu",
"ErrorQueu": "LocalErrorQueu"
}
},
...
I want to replace all the values with new ones.
E.g.: DefaultQueu is the name of the key, and I want its value LocalTestQueu to be changed to "MyProductionQueu".
For example, a key in CircleCI would be something like:
MessageBus.Sqs.DefaultQueu = MyProductionQueu
and
ConnectionStrings.DatabaseConnectionString = MyProductionDatabaseConnectionString
How would I do that?
I know there are environment variables, where I can do something like:
"DatabaseConnectionString": "$MyConnectionString"
and simply string-replace $MyConnectionString with the real connection string. But that is not what I am looking for.
We have all our local connection strings stored in source control, so we need key/value replacement as described above.
Octopus lets us do something like this:
There is no support for this at the moment.
Running cdk deploy after updating my Stack:
export function createTaskXXXX (stackScope: Construct, workflowContext: WorkflowContext) {
    const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
        runtime: Globals.LAMBDA_RUNTIME,
        memorySize: Globals.LAMBDA_MEMORY_MAX,
        code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
        handler: 'xxxx-handler.handler',
        timeout: Duration.minutes(Globals.LAMBDA_DURATION_2MIN),
        environment: {
            YYYY_ENV: (workflowContext.production) ? 'prod' : 'test',
            YYYY_A_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/`,
            YYYY_B_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/` // <--- added
        }
    })

    lambdaXXXX.addToRolePolicy(new iam.PolicyStatement({
        effect: Effect.ALLOW,
        actions: ['s3:PutObject'],
        resources: [
            `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/*`,
            `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/*` // <---- added
        ]
    }))
}
I realize that those changes are not reflected in stack.template.json:
...
"Runtime": "nodejs12.x",
"Environment": {
    "Variables": {
        "YYYY_ENV": "test",
        "YYYY_A_LOCATION": "s3://.../adata-workflow/split-input/"
    }
},
"MemorySize": 3008,
"Timeout": 120
}
...
I have cleaned cdk.out and tried deploy --force, but I never see any updates.
Is deleting the stack and redeploying the only remaining alternative, or am I missing something? I would think that at least cdk synth should generate different output.
(I also changed to CDK 1.65.0 on my local system to match the package.json.)
Thanks.
EDITED: I cloned the project again, ran npm install and cdk synth, and finally saw the changes. I would rather not do this every time; any idea what could be blocking the correct synth generation?
EDITED 2: After a diff between the bad old project and the fresh clone where synth worked, I realized that some of my project .ts files (for example cdk.ts, my App definition) also had replicas with .js and .d.ts extensions, such as cdk.js and cdk.d.ts. Could I have run some command by mistake that compiled the TypeScript? I will continue to investigate; thanks to all answers.
Because CDK uses CloudFormation, it computes a change set to decide what needs updating. That is to say, if it doesn't think anything has changed, it won't change that resource.
This can, of course, be very annoying, as sometimes it decides nothing has changed and doesn't update when there actually is a change. I find this most often with Layers, when using some form of makefile to generate the zips for the layers. Even though it produces a 'new' zip, whatever is used to determine that the zip has changed registers it as the same, because of whatever compression/hash/etc. differences are involved.
You can get around this by putting a datetime in the resource's description. It is assigned at synth (which runs as part of cdk deploy), so if you insert the current datetime the description changes on every deployment and forces an update.
You can also use cdk diff to see what it thinks the changes are.
And finally... always remember to save your file before deployments, as, depending on your IDE, the change may not be visible to the command line ;)
From the code shown I would expect it to update, so I don't know why it doesn't.
One workaround is to comment out the Lambda part once and deploy, then uncomment it and deploy again, which recreates the Lambda.
This is how I do it. Works nicely so far. Basically you can do the following:
Push your lambda code as a zip file to an S3 bucket. The bucket must have versioning enabled.
The CDK code below will do the following:
Create a custom resource. It basically calls s3.listObjectVersions for my lambda zip file in S3. I grab the first returned value, which seems to be the most recent object version all the time (I cannot confirm this with the documentation though). I also create a role for the custom resource.
Create the lambda and specify the code as the zip file in s3 AND THE OBJECT VERSION RETURNED BY THE CUSTOM RESOURCE! That is the most important part.
Create a new lambda version.
Then the lambda's code updates when you deploy the CDK stack!
const versionIdKey = 'Versions.0.VersionId';
const isLatestKey = 'Versions.0.IsLatest';
const now = new Date().toISOString();

const role = new Role(this, 'custom-resource-role', {
    assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
role.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')); // you can make this more specific

// I'm not 100% sure this gives you the most recent first, but it seems to be doing that every time for me. I can't find anything in the docs about it...
const awsSdkCall: AwsSdkCall = {
    action: "listObjectVersions",
    parameters: {
        Bucket: buildOutputBucket.bucketName, // S3 bucket with the zip file containing the lambda code
        MaxKeys: 1,
        Prefix: LAMBDA_S3_KEY, // S3 key of the zip file containing the lambda code
    },
    physicalResourceId: PhysicalResourceId.of(buildOutputBucket.bucketName),
    region: 'us-east-1', // or whatever region
    service: "S3",
    outputPaths: [versionIdKey, isLatestKey]
};

const customResourceName = 'get-object-version';
const customResourceId = `${customResourceName}-${now}`; // not sure if `now` is necessary...
const response = new AwsCustomResource(this, customResourceId, {
    functionName: customResourceName,
    installLatestAwsSdk: true,
    onCreate: awsSdkCall,
    onUpdate: awsSdkCall,
    policy: AwsCustomResourcePolicy.fromSdkCalls({resources: AwsCustomResourcePolicy.ANY_RESOURCE}), // you can make this more specific
    resourceType: "Custom::ListObjectVersions",
    role: role
});

const fn = new Function(this, 'my-lambda', {
    functionName: 'my-lambda',
    description: `${response.getResponseField(versionIdKey)}-${now}`,
    runtime: Runtime.NODEJS_14_X,
    memorySize: 1024,
    timeout: Duration.seconds(5),
    handler: 'index.handler',
    code: Code.fromBucket(buildOutputBucket, LAMBDA_S3_KEY, response.getResponseField(versionIdKey)), // This is where the magic happens. You tell CDK to use a specific S3 object version when updating the lambda.
    currentVersionOptions: {
        removalPolicy: RemovalPolicy.DESTROY,
    },
});

new Version(this, `version-${now}`, { // not sure if `now` is necessary...
    lambda: fn,
    removalPolicy: RemovalPolicy.DESTROY
});
Do note:
For this to work, you have to upload your lambda zip code to S3 before each cdk deploy. It can be the same code as before, but the S3 bucket versioning will create a new object version. I use CodePipeline to do this as part of additional automation.
I'm new to Jenkins and inherited a bunch of declarative pipelines of unknown code quality. Each pipeline uses folder properties to set shared default parameter values. This puts essential variables outside of source control, which kills our PR process and our history for debugging. For example:
// pipelineA/Jenkinsfile
pipeline {
    parameters {
        string name: 'important_variable', defaultValue: folderProperty('important_variable')
    }
    // etc
}

// pipelineB/Jenkinsfile
pipeline {
    parameters {
        string name: 'important_variable', defaultValue: folderProperty('important_variable')
    }
    // etc
}
Then, in the root folder, a property important_variable is set to "Hello World".
Is there a way to get this into source control, either by setting the folder property to pull the variable from a YAML file, or by using shared libraries?
Thank you for any help!
In case anyone reads this, here is what we ended up doing:
Create a bootstrap.groovy file.
This file MUST go in a /vars directory at the absolute top of your repo.
Using the Jenkins UI, go to the pipeline's parent directory > Config and create a shared library called config-lib that points at the repo; it automatically exposes the bootstrap.groovy methods as long as the file is in the right place.
The bootstrap.groovy file has a call method that returns a map with key/value pairs for our default parameters. The method has to be named call (a sketch of this file is shown below).
In the Jenkinsfile for the pipeline, include the following two lines:
@Library("config-lib") _
config = bootstrap()
The @Library annotation (note the trailing _) imports the config-lib library defined in the Jenkins UI.
The bootstrap() call invokes the call method from the bootstrap.groovy file in that config-lib library.
In the Jenkinsfile, use the config map to populate the parameter default values:
pipeline {
    parameters {
        string name: 'foo', defaultValue: config.foo
    }
    // etc
}
And it's done.
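For reference, a minimal vars/bootstrap.groovy along these lines might look like the sketch below; the parameter names and values are placeholders, not the ones from our actual setup:

// vars/bootstrap.groovy in the config-lib repository.
// The method must be named "call" so that bootstrap() can be invoked directly.
def call() {
    return [
        important_variable: 'Hello World',   // placeholder default values
        foo               : 'some-default'
    ]
}

Because this file lives in the library repo, changes to the default values now go through the normal PR process instead of being hidden in folder properties.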
This video helped immensely: https://youtu.be/Wj-weFEsTb0
I am trying to use a global class that I've defined in a shared library to help organise job parameters. It's not working, and I'm not even sure if it is possible.
My job looks something like this:
pipelineJob('My-Job') {
    definition {
        // Job definition goes here
    }
    parameters {
        choiceParam('awsAccount', awsAccount.ALL)
    }
}
In a file in /vars/awsAccount.groovy I have the following code:
class awsAccount implements Serializable {
    static final String SANDPIT = "sandpit"
    static final String DEV = "dev"
    static final String PROD = "prod"

    static String[] ALL = [SANDPIT, DEV, PROD]
}
Global pipeline libraries are configured to load implicitly from my repository's master branch.
When attempting to update the DSL scripts I receive the error:
ERROR: (myJob.groovy, line 67) No such property: awsAccount for class: javaposse.jobdsl.dsl.helpers.BuildParametersContext
Why does it not find the class, and is it even possible to use shared library classes like this in a pipeline job?
Disclaimer: I know this works using a Jenkinsfile. Unfortunately, I have not tested it using declarative pipelines, but there are no other answers yet, so it may be worth a try.
Regarding your first question: there are several reasons why a class from your shared lib might not be found, starting with the library import, the library syntax, etc. But shared libraries definitely do work for DSL. To be more precise about it, additional information would be needed. In any case, be sure that:
You have your Groovy class definition using exactly the directory structure described in the documentation (https://www.jenkins.io/doc/book/pipeline/shared-libraries/)
You give the shared lib a name in Jenkins when you configure it, and that it is exactly the name you use in the import
You use the import as described in the documentation (under Using Libraries)
Regarding your second question (the one that names this SO question): yes, you can populate job parameters from information in your shared lib. At least, using Jenkinsfiles. You can even define properties to be included in the pipeline. I got it working with a tricky syntax due to different problems.
Again, I am using Jenkinsfile and this is what worked for me:
In my shared-lib class, I added a static function that introduces the build parameters. Notice the input parameters that function needs and its usage:
class awsAccount implements Serializable {
    // ...
    static giveMeParameters (script) {
        return [
            // Some params
            script.string(defaultValue: '', description: 'A default parameter', name: 'textParm'),
            script.booleanParam(defaultValue: false, description: 'If set to True, do whatever you need - otherwise, do not do it', name: 'boolOption'),
        ]
    }
}
To introduce those parameters in the pipeline, you need to place the function's return value into the parameters array:
properties (
    parameters (
        awsAccount.giveMeParameters (this)
    )
)
Again, notice the syntax when calling the function. Similarly, you can also define functions in the shared lib that return properties and use them across multiple jobs (disableConcurrentBuilds, buildDiscarder, etc.).
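To illustrate that last point, here is a sketch of such a helper; the class name jobDefaults (which would live in its own file in the shared lib) and the concrete property values are just examples I made up, not something from the setup above:

class jobDefaults implements Serializable {
    // Returns common job properties so that several Jenkinsfiles can share the same defaults.
    static giveMeProperties (script) {
        return [
            script.disableConcurrentBuilds(),
            script.buildDiscarder(script.logRotator(numToKeepStr: '10')),
        ]
    }
}

In the Jenkinsfile it would then be used the same way as above, e.g. properties(jobDefaults.giveMeProperties(this)).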
I have a multi-branch pipeline job set to build from a Jenkinsfile, polling every minute for new changes from the git repo. I have a step that deploys the artifact to an environment if the branch name is of a certain format. I would like to be able to configure the environment on a per-branch basis without having to edit the Jenkinsfile every time I create a new branch of that kind. Here is a rough sketch of my Jenkinsfile:
pipeline {
    agent any
    parameters {
        string(description: "DB name", name: "dbName")
    }
    stages {
        stage("Deploy") {
            steps {
                deployTo "${params.dbName}"
            }
        }
    }
}
Is there a Jenkins plugin that will let me define a default value for the dbName parameter per branch in the job configuration page? Ideally something like the mock-up below:
The values should be able to be reordered to set priority. The plugin stops checking for matches after the first one. Matching can be exact or regex.
If there isn't such a plugin currently, please point me to the closest open-source one you can think of. I can use it as a basis for coding a custom plugin.
A possible plugin you could use as a starting point for a custom plugin is the Dynamic Parameter Plugin.
Here is a workaround:
Using the Jenkins Config File Provider plugin, create a JSON config file with the parameters defined per branch. Example:
{
    "develop": {
        "dbName": "test_db",
        "param2": "value"
    },
    "master": {
        "dbName": "prod_db",
        "param2": "value1"
    },
    "test_branch_1": {
        "dbName": "zss_db",
        "param2": "value2"
    },
    "default": {
        "dbName": "prod_db",
        "param2": "value3"
    }
}
In your Jenkinsfile:
final commit_data = checkout(scm)
BRANCH = commit_data['GIT_BRANCH']

configFileProvider([configFile(fileId: '{Your config file id}', variable: 'BRANCH_SETTINGS')]) {
    def config = readJSON file: "$BRANCH_SETTINGS"
    def branch_config = config."${BRANCH}"
    if (branch_config) {
        echo "using config for branch ${BRANCH}"
    } else {
        branch_config = config.default
    }
    echo branch_config.'dbName'
}
You can then use branch_config.'dbName', branch_config.'param2', etc. You can even assign it to a global variable and use it throughout your pipeline.
The config file can easily be edited via the Jenkins UI (provided by the plugin) to provision new branches/params in the future. This doesn't need access to any non-sandbox methods.
Not really an answer to your question, but possibly a workaround...
I don't know what the rest of your parameter list looks like, but if it is a static list, you could potentially add a "use Default" option as the first entry.
When the job is run, if the value is "use Default", then read the default from a file stored in the SCM branch and use that (see the sketch below).
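A rough sketch of that idea; the file name defaults.yaml, the example choices, and the use of readYaml (from the Pipeline Utility Steps plugin) are my assumptions, and deployTo is the custom step from the question:

pipeline {
    agent any
    parameters {
        // "use Default" sentinel as the first, default choice
        choice(name: 'dbName', choices: ['use Default', 'test_db', 'prod_db'], description: 'DB name')
    }
    stages {
        stage('Deploy') {
            steps {
                script {
                    def dbName = params.dbName
                    if (dbName == 'use Default') {
                        // defaults.yaml is committed to each branch, e.g.:  dbName: branch_specific_db
                        dbName = readYaml(file: 'defaults.yaml').dbName
                    }
                    deployTo "${dbName}"
                }
            }
        }
    }
}

Each branch then carries its own defaults.yaml, so adding a new branch only needs a new file rather than a Jenkinsfile edit.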
I'm developing my first Grails plugin. It has to access a web service, so the plugin will obviously need the web service URL. What is the best way to configure this without hardcoding it into the Groovy classes? It would be nice to have different config for different environments.
You might want to Keep It Simple(tm): define the URL directly in Config.groovy, including per-environment settings, and access it from your plugin as needed using grailsApplication.config (in most cases) or the ConfigurationHolder.config object (see further details in the manual).
As an added bonus, that setting may also be defined in standard Java properties files or in other configuration files specified in grails.config.locations.
e.g. in Config.groovy:
// This will be the default value...
myPlugin.url = "http://somewhe.re/test/endpoint"

environments {
    production {
        // ...except when running in production mode
        myPlugin.url = "http://somewhe.re/for-real/endpoint"
    }
}
later, in a service provided by your plugin
import org.codehaus.groovy.grails.commons.ConfigurationHolder

class MyPluginService {
    def url = ConfigurationHolder.config.myPlugin.url
    // ...
}
If it's only a small (read: one item) config option, it might just be easier to slurp in a properties file. If there are a number of configuration options, and some of them should be dynamic, I would suggest doing what the Acegi Security plugin does: add a file such as /grails-app/conf/plugin_name_config.groovy.
An added bonus is that the user can execute Groovy code to compute their configuration options (much better than using properties files), as well as being able to handle different environments with ease.
Check out http://groovy.codehaus.org/ConfigSlurper , which is what Grails uses internally to slurp configs like Config.groovy.
// e.g. in /grails-app/conf/MyWebServicePluginConfig.groovy
somePluginName {
    production {
        property1 = "some string"
    }
    test {
        property1 = "another"
    }
}

// in your myWebServicePlugin.groovy file, perhaps in the doWithSpring closure
GroovyClassLoader classLoader = new GroovyClassLoader(getClass().getClassLoader())
ConfigObject config
try {
    config = new ConfigSlurper().parse(classLoader.loadClass('MyWebServicePluginConfig'))
} catch (Exception e) {
    // ?? handle or what? use a default here?
}
assert config.somePluginName.test.property1 == "another"