I have a requirement where I need to check for a file on the puppet master and copy it to the agent only if it is not empty.
I have the following so far:
exec {
  'check_empty_file':
    provider => shell,
    command  => "test -s puppet:////path/to/puppetmaster/file",
    returns  => ["0", "1"],
}

if $check_empty_file == '0' {
  file {
    'file_name':
      path   => '/path/to/agent/file',
      alias  => '/path/to/agent/file',
      source => "puppet:///path/to/puppetmaster/file",
  }
}
But it doesn't work. Any help is appreciated. Thanks!
You cannot use an Exec resource to perform the check, because you need to perform the evaluation during catalog building, and resources are not applied until after the catalog is built. Moreover, the test command tests the specified path on the local filesystem. It does not know about URLs, and even if it did, it would be unlikely to recognize or handle the puppet: URL scheme. Furthermore, there is no association whatsoever between resource titles and variable names.
To gather data at catalog-building time, you're looking for a Puppet function. It is not that hard to add your own custom function to Puppet, but you don't need that in your case -- the built-in file() function will serve your purpose. It might look something like this:
$file_content = file('<module-name>/<file-name>')

if $file_content != '' {
  file { '/path/to/target/file':
    ensure  => 'file',
    content => $file_content,
    # ...
  }
}
Running cdk deploy after updating my Stack:
export function createTaskXXXX(stackScope: Construct, workflowContext: WorkflowContext) {
  const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
    runtime: Globals.LAMBDA_RUNTIME,
    memorySize: Globals.LAMBDA_MEMORY_MAX,
    code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
    handler: 'xxxx-handler.handler',
    timeout: Duration.minutes(Globals.LAMBDA_DURATION_2MIN),
    environment: {
      YYYY_ENV: (workflowContext.production) ? 'prod' : 'test',
      YYYY_A_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/`,
      YYYY_B_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/` // <--- added
    }
  })

  lambdaXXXX.addToRolePolicy(new iam.PolicyStatement({
    effect: Effect.ALLOW,
    actions: ['s3:PutObject'],
    resources: [
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/*`,
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/*` // <---- added
    ]
  }))
I realize that those changes are not reflected in stack.template.json:
...
    "Runtime": "nodejs12.x",
    "Environment": {
      "Variables": {
        "YYYY_ENV": "test",
        "YYYY_A_LOCATION": "s3://.../adata-workflow/split-input/"
      }
    },
    "MemorySize": 3008,
    "Timeout": 120
  }
...
I have cleaned cdk.out and tried deploy --force, but I never see any updates.
Is deleting the stack and redeploying it the only remaining alternative, or am I missing something? I think at least cdk synth should generate different results.
(I also changed to CDK 1.65.0 on my local system to match the package.json.)
Thanks.
EDITED: I did a fresh git clone of the project, ran npm install and cdk synth again, and finally saw the changes. I would rather not do this every time; any idea what could be blocking the correct synth generation?
EDITED 2: After a diff between the bad old project and the fresh clone where synth worked, I realized that some of my project files with a .ts extension (for example cdk.ts, my App definition) also had compiled replicas with .js and .d.ts extensions, such as cdk.js and cdk.d.ts. Could I have run some command by mistake that compiled the TypeScript? I will continue to investigate. Thanks to all answers.
Because CDK uses CloudFormation, it computes a ChangeSet to work out what to update. That is to say, if it doesn't think anything has changed, it won't touch that resource.
This can, of course, be very annoying: sometimes it decides nothing has changed and doesn't update when there actually is a change. I find this most often with Layers, when using some form of makefile to generate the zips for the layers. Even though it makes a 'new' zip, whatever CDK uses to decide whether the zip has changed (compression, hashing, etc.) considers it the same.
You can get around this by updating the resource's description with a datetime. It's assigned at synth time (which is part of cdk deploy), so a current timestamp forces the template to differ on every deployment.
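As a hedged sketch of that trick (the construct and names here are placeholders, assuming CDK v1 as used in the question):

import { Construct, Duration } from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';

export function createTimestampedLambda(scope: Construct): lambda.Function {
  return new lambda.Function(scope, 'MyFunction', {
    runtime: lambda.Runtime.NODEJS_12_X,
    code: lambda.Code.fromAsset('path/to/assets'), // placeholder asset path
    handler: 'index.handler',
    timeout: Duration.minutes(2),
    // Evaluated at synth time, so every `cdk deploy` produces a slightly
    // different template and CloudFormation always updates the function.
    description: `deployed ${new Date().toISOString()}`,
  });
}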
You can also use cdk diff to see what it thinks the changes are.
And finally... always remember to save your file before deployments as, depending on your IDE, it may not be available to the command line ;)
Looking at the code, I would expect it to update, and I don't know why it doesn't.
One workaround is to comment out the Lambda part once and deploy, then uncomment it and deploy again, which recreates the Lambda.
This is how I do it. Works nicely so far. Basically you can do the following:
Push your lambda code as a zip file to an S3 bucket. The bucket must have versioning enabled.
The CDK code below will do the following:
Create a custom resource. It basically calls s3.listObjectVersions for my lambda zip file in S3. I grab the first returned value, which seems to be the most recent object version all the time (I cannot confirm this with the documentation though). I also create a role for the custom resource.
Create the lambda and specify the code as the zip file in s3 AND THE OBJECT VERSION RETURNED BY THE CUSTOM RESOURCE! That is the most important part.
Create a new lambda version.
Then the lambda's code updates when you deploy the CDK stack!
const versionIdKey = 'Versions.0.VersionId';
const isLatestKey = 'Versions.0.IsLatest';
const now = new Date().toISOString();

const role = new Role(this, 'custom-resource-role', {
  assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
role.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')); // you can make this more specific

// I'm not 100% sure this gives you the most recent first, but it seems to be doing that every time for me. I can't find anything in the docs about it...
const awsSdkCall: AwsSdkCall = {
  action: "listObjectVersions",
  parameters: {
    Bucket: buildOutputBucket.bucketName, // S3 bucket with the zip file containing the lambda code
    MaxKeys: 1,
    Prefix: LAMBDA_S3_KEY, // S3 key of the zip file containing the lambda code
  },
  physicalResourceId: PhysicalResourceId.of(buildOutputBucket.bucketName),
  region: 'us-east-1', // or whatever region
  service: "S3",
  outputPaths: [versionIdKey, isLatestKey]
};

const customResourceName = 'get-object-version';
const customResourceId = `${customResourceName}-${now}`; // not sure if `now` is necessary...

const response = new AwsCustomResource(this, customResourceId, {
  functionName: customResourceName,
  installLatestAwsSdk: true,
  onCreate: awsSdkCall,
  onUpdate: awsSdkCall,
  policy: AwsCustomResourcePolicy.fromSdkCalls({resources: AwsCustomResourcePolicy.ANY_RESOURCE}), // you can make this more specific
  resourceType: "Custom::ListObjectVersions",
  role: role
});

const fn = new Function(this, 'my-lambda', {
  functionName: 'my-lambda',
  description: `${response.getResponseField(versionIdKey)}-${now}`,
  runtime: Runtime.NODEJS_14_X,
  memorySize: 1024,
  timeout: Duration.seconds(5),
  handler: 'index.handler',
  code: Code.fromBucket(buildOutputBucket, LAMBDA_S3_KEY, response.getResponseField(versionIdKey)), // This is where the magic happens. You tell CDK to use a specific S3 object version when updating the lambda.
  currentVersionOptions: {
    removalPolicy: RemovalPolicy.DESTROY,
  },
});

new Version(this, `version-${now}`, { // not sure if `now` is necessary...
  lambda: fn,
  removalPolicy: RemovalPolicy.DESTROY
});
Do note:
For this to work, you have to upload your lambda zip code to S3 before each cdk deploy. This can be the same code as before, but the s3 bucket versioning will create a new version. I use code pipeline to do this as part of additional automation.
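For reference, that upload step can be as simple as a single CLI call before the deploy (the bucket and key below are placeholders):

aws s3 cp build/lambda.zip s3://my-build-output-bucket/path/to/lambda.zip
cdk deploy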
My motivation: our codebase is scattered across at least 20 git repos. I want to consolidate everything into a single git repo with a single build system. Currently we use SBT, but we think the build would take too long, so I am examining the possibility of using Bazel instead.
Most of our codebase uses Scala 2.12, some of our codebase uses Scala 2.11, and the rest needs to build under both Scala 2.11 and Scala 2.12.
I'm trying to use bazelbuild/rules_scala.
With the following call to scala_repositories in my WORKSPACE, I can build using Scala 2.12:
scala_repositories(("2.12.6", {
"scala_compiler": "3023b07cc02f2b0217b2c04f8e636b396130b3a8544a8dfad498a19c3e57a863",
"scala_library": "f81d7144f0ce1b8123335b72ba39003c4be2870767aca15dd0888ba3dab65e98",
"scala_reflect": "ffa70d522fc9f9deec14358aa674e6dd75c9dfa39d4668ef15bb52f002ce99fa"
}))
If I have the following call instead, I can build using Scala 2.11:
scala_repositories(("2.11.12", {
"scala_compiler": "3e892546b72ab547cb77de4d840bcfd05c853e73390fed7370a8f19acb0735a0",
"scala_library": "0b3d6fd42958ee98715ba2ec5fe221f4ca1e694d7c981b0ae0cd68e97baf6dce",
"scala_reflect": "6ba385b450a6311a15c918cf8688b9af9327c6104f0ecbd35933cfcd3095fe04"
}))
However, it is not possible to specify in my BUILD files on a package level which version(s) of Scala to build with. I must specify this globally in my WORKSPACE.
To work around this, my plan is to set up configurable attributes, so I can pass --define scala=2.11 to build with Scala 2.11 and --define scala=2.12 to build with Scala 2.12.
First I tried by putting this code in my WORKSPACE:
config_setting(
    name = "scala-2.11",
    define_values = {
        "scala": "2.11",
    },
)

config_setting(
    name = "scala-2.12",
    define_values = {
        "scala": "2.12",
    },
)

scala_repositories(
    select({
        "scala-2.11": "2.11.12",
        "scala-2.12": "2.12.6",
    }),
    select({
        "scala-2.11": {
            "scala_compiler": "3e892546b72ab547cb77de4d840bcfd05c853e73390fed7370a8f19acb0735a0",
            "scala_library": "0b3d6fd42958ee98715ba2ec5fe221f4ca1e694d7c981b0ae0cd68e97baf6dce",
            "scala_reflect": "6ba385b450a6311a15c918cf8688b9af9327c6104f0ecbd35933cfcd3095fe04",
        },
        "scala-2.12": {
            "scala_compiler": "3023b07cc02f2b0217b2c04f8e636b396130b3a8544a8dfad498a19c3e57a863",
            "scala_library": "f81d7144f0ce1b8123335b72ba39003c4be2870767aca15dd0888ba3dab65e98",
            "scala_reflect": "ffa70d522fc9f9deec14358aa674e6dd75c9dfa39d4668ef15bb52f002ce99fa",
        },
    }),
)
But this gave me the error config_setting cannot be in the WORKSPACE file.
So then I tried moving code into a Starlark file.
In tools/build_rules/scala.bzl:
config_setting(
    name = "scala-2.11",
    define_values = {
        "scala": "2.11",
    },
)

config_setting(
    name = "scala-2.12",
    define_values = {
        "scala": "2.12",
    },
)

def scala_version():
    return select({
        "scala-2.11": "2.11.12",
        "scala-2.12": "2.12.6",
    })

def scala_machinery():
    return select({
        "scala-2.11": {
            "scala_compiler": "3e892546b72ab547cb77de4d840bcfd05c853e73390fed7370a8f19acb0735a0",
            "scala_library": "0b3d6fd42958ee98715ba2ec5fe221f4ca1e694d7c981b0ae0cd68e97baf6dce",
            "scala_reflect": "6ba385b450a6311a15c918cf8688b9af9327c6104f0ecbd35933cfcd3095fe04",
        },
        "scala-2.12": {
            "scala_compiler": "3023b07cc02f2b0217b2c04f8e636b396130b3a8544a8dfad498a19c3e57a863",
            "scala_library": "f81d7144f0ce1b8123335b72ba39003c4be2870767aca15dd0888ba3dab65e98",
            "scala_reflect": "ffa70d522fc9f9deec14358aa674e6dd75c9dfa39d4668ef15bb52f002ce99fa",
        },
    })
And back in my WORKSPACE:
load("//tools/build_rules:scala.bzl", "scala_version", "scala_machinery")
scala_repositories(scala_version(), scala_machinery())
But now I get this error:
tools/build_rules/scala.bzl:1:1: name 'config_setting' is not defined
This confuses me, because I thought config_setting() was built in. I can't find where I should load it in from.
So, my questions:
How do I load config_setting() into my .bzl file?
Or, is there a better way of controlling from the command line which arguments get passed to scala_repositories()?
Or, is this just not possible?
$ bazel version
Build label: 0.17.2-homebrew
Build target: bazel-out/darwin-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Fri Sep 28 10:42:37 2018 (1538131357)
Build timestamp: 1538131357
Build timestamp as int: 1538131357
If you call a native rule from a .bzl file, you must use the native. prefix, so in this case you would call native.config_setting.
However, this is going to lead to the same error: config_setting is a BUILD rule, not a WORKSPACE rule.
If you want to change the build tool used for a particular target, you can change the toolchain, and this seems to be supported via scala_toolchain.
And I believe you can use a config to select the toolchain.
I'm unfamiliar with what scala_repositories does. I hope it defines the toolchain with a proper versioned name, so that you can reference the desired toolchain correctly. And I hope you can invoke it twice in the same workspace; otherwise I think there is no solution.
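For what it's worth, here is a hedged sketch of the config-based selection in a BUILD file; the toolchain labels are hypothetical and assume both versions' toolchains can be defined side by side:

# BUILD (not WORKSPACE): config_setting and select() are only usable here.
config_setting(
    name = "scala-2.11",
    define_values = {"scala": "2.11"},
)

config_setting(
    name = "scala-2.12",
    define_values = {"scala": "2.12"},
)

# Hypothetical: point one label at whichever toolchain matches --define scala=...
alias(
    name = "scala_toolchain",
    actual = select({
        ":scala-2.11": "//toolchains:scala_2_11_toolchain",
        ":scala-2.12": "//toolchains:scala_2_12_toolchain",
    }),
)

Builds would then be invoked with, e.g., bazel build --define scala=2.11 //... .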
I have the following setup:
one es-docker (live)
one es-docker (working)
I would like the working docker to run some data changes and save them inside the ES application. (These changes will run over a few hours.)
After these changes are done, I wish to copy the working docker (with all its data) and overwrite the live docker.
That way I can run the changes over a few hours without downtime on live (or with minimal downtime).
But I don't know how to "copy" the original, including all its data.
Thank you for your hints.
The Elasticsearch Definitive Guide outlines a process to achieve zero downtime for use cases like yours, making use of Index Aliases.
The idea is to create an Index Alias that your applications will always use to access the live data.
Given an alias named "alias1" that is pointing to an index named "index1", perform the following steps:
Create a new index, named "index2"
Run your batch indexing process
Swap "alias1" to point to "index2"
Clean up "index1"
The alias swapping occurs in a single call, and Elasticsearch performs the action atomically, giving you the zero downtime you desire. The call looks something like this:
POST /_aliases
{
  "actions" : [
    { "remove" : { "index" : "index1", "alias" : "alias1" } },
    { "add" : { "index" : "index2", "alias" : "alias1" } }
  ]
}
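For completeness, the initial alias assumed above can be created up front with a single call, for example:

PUT /index1/_alias/alias1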
First, I came from a .NET background, so please excuse my lack of Groovy lingo. Back when I was in a .NET shop, we were using TypeScript with C# to build web apps. In our controllers, we would always receive/respond with DTOs (data transfer objects). This got to be quite the headache: every time you created or modified a DTO, you had to update the TypeScript interface (the d.ts file) that corresponded to it.
So we created a little app (a simple exe) that loaded the dll from the web app, reflected over it to find the DTOs (filtering by specific namespaces), parsed through them to find each class name, its properties, and the properties' data types, generated that information into a string, and finally saved it into a d.ts file.
This app was then configured to run on every build of the website. That way, when you go to run/debug/build the website, it would update your d.ts files automatically - which made working with TypeScript that much easier.
Long story short, how could I achieve this with a Grails website if I were to write a simple Groovy app to generate the d.ts files that I want?
-- OR --
How do I get the IDE (e.g. IntelliJ) to run a Groovy file (that is part of the app) that does this generation post-build?
I did find this but still need a way to run on compile:
Groovy property iteration
class Foo {
    def feck = "fe"
    def arse = "ar"
    def drink = "dr"
}

class Foo2 {
    def feck = "fe2"
    def arse = "ar2"
    def drink = "dr2"
}

def f = new Foo()
def f2 = new Foo2()

f2.properties.each { prop, val ->
    if (prop in ["metaClass", "class"]) return
    if (f.hasProperty(prop)) f[prop] = val
}

assert f.feck == "fe2"
assert f.arse == "ar2"
assert f.drink == "dr2"
I've been able to extract the Domain Objects and their persistent fields via the following Gant script:
In scripts/Props.groovy:
import static groovy.json.JsonOutput.*
includeTargets << grailsScript("_GrailsBootstrap")
target(props: "Lists persistent properties for each domain class") {
    depends(loadApp)

    def propMap = [:].withDefault { [] }

    grailsApp.domainClasses.each {
        it?.persistentProperties?.each { prop ->
            if (prop.hasProperty('name') && prop.name) {
                propMap[it.clazz.name] << ["${prop.name}": "${prop.getType()?.name}"]
            }
        }
    }

    // do any necessary file I/O here (just printing it now as an example)
    println prettyPrint(toJson(propMap))
}

setDefaultTarget(props)
This can be run via the command line like so:
grails props
Which produces output like the following:
{
    "com.mycompany.User": [
        { "type": "java.lang.String" },
        { "username": "java.lang.String" },
        { "password": "java.lang.String" }
    ],
    "com.mycompany.Person": [
        { "name": "java.lang.String" },
        { "alive": "java.lang.Boolean" }
    ]
}
A couple of drawbacks to this approach are that we don't get any transient properties, and I'm not exactly sure how to hook this into the _Events.groovy eventCompileEnd event.
Thanks Kevin! Just wanted to mention: in order to get this to run, here are a few steps I had to take in my case, which I thought I would share:
-> Open up the grails BuildConfig.groovy
-> Change tomcat from build to compile like this:
plugins {
    compile ":tomcat:[version]"
}
-> Drop your Props.groovy into the scripts folder on the root (noting the path to the grails-app folder for reference)
[application root]/scripts/Props.groovy
[application root]/grails-app
-> Open Terminal
gvm use grails [version]
grails compile
grails Props
Note: I was using Grails 2.3.11 for the project I was running this on.
That gets everything in the script to run successfully for me. Now to modify the println portion to generate TypeScript interfaces.
Will post a GitHub link when it is ready, so be sure to check back.
In our Grails web applications, we'd like to use external configuration files so that we can change the configuration without releasing a new version. We'd also like these files to be outside of the application directory so that they stay unchanged during continuous integration.
The last thing we need to do is to make sure the external configuration files exist. If they don't, then we'd like to create them, fill them with predefined content (production environment defaults) and then use them as if they existed before. This allows any administrator to change settings of the application without detailed knowledge of the options actually available.
For this purpose, there's a couple of files within web-app/WEB-INF/conf ready to be copied to the external configuration location upon the first run of the application.
So far so good. But we need to do this before the application is initialized, so that production-related modifications to the data source definitions are taken into account.
I can do the copy-and-load operation inside the Config.groovy file, but I don't know the absolute location of the WEB-INF/conf directory at the moment.
How can I get the location during this early phase of initialization? Is there any other solution to the problem?
There is a best practice for this.
In general, never write to the folder where the application is deployed. You have no control over it. The next rollout will remove everything you wrote there.
Instead, leverage the built-in configuration capabilities the real pros use (Spring and/or JPA).
JNDI is the norm for looking up resources like databases, files and URLs.
Operations will have to configure JNDI, but they appreciate the attention.
They also need an initial set of configuration files, and be prepared to make changes at times as required by the development team.
As always, all configuration files should be in your source code repo.
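As a hedged illustration of the JNDI route in a Grails app (the JNDI name below is a placeholder; the actual binding is configured in the container by operations):

// grails-app/conf/DataSource.groovy
environments {
    production {
        dataSource {
            // Look up the container-managed connection pool instead of
            // configuring driver, URL and credentials in the application.
            jndiName = "java:comp/env/jdbc/myAppDataSource"
        }
    }
}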
I finally managed to solve this myself by using the Java's ability to locate resources placed on the classpath.
I took the .groovy files that are later to be copied outside, placed them into the grails-app/conf directory (which is on the classpath), and appended a suffix to their names so that they wouldn't get compiled when packaging the application. So now I have *Config.groovy files containing configuration defaults (for all environments) and *Config.groovy.production files containing defaults for the production environment (overriding the precompiled defaults).
Now - Config.groovy starts like this:
grails.config.defaults.locations = [ EmailConfig, AccessConfig, LogConfig, SecurityConfig ]

environments {
    production {
        grails.config.locations = ConfigUtils.getExternalConfigFiles(
            '.production',
            "${userHome}${File.separator}.config${File.separator}${appName}",
            'AccessConfig.groovy',
            'Config.groovy',
            'DataSource.groovy',
            'EmailConfig.groovy',
            'LogConfig.groovy',
            'SecurityConfig.groovy'
        )
    }
}
Then the ConfigUtils class:
import java.util.logging.Logger

import org.apache.commons.io.FileUtils  // FileUtils.copyFile comes from Apache Commons IO

public class ConfigUtils {

    // Log4j may not be initialized yet
    private static final Logger LOG = Logger.getGlobal()

    public static def getExternalConfigFiles(final String defaultSuffix, final String externalConfigFilesLocation, final String... externalConfigFiles) {
        final def externalConfigFilesDir = new File(externalConfigFilesLocation)

        LOG.info "Loading configuration from ${externalConfigFilesDir}"

        if (!externalConfigFilesDir.exists()) {
            LOG.warning "${externalConfigFilesDir} not found. Creating..."
            try {
                externalConfigFilesDir.mkdirs()
            } catch (e) {
                LOG.severe "Failed to create external configuration storage. Default configuration will be used."
                e.printStackTrace()
                return []
            }
        }

        final def cl = ConfigUtils.class.getClassLoader()
        def result = []

        externalConfigFiles.each {
            final def file = new File(externalConfigFilesDir, it)
            if (file.exists()) {
                result << file.toURI().toURL()
                return
            }

            // These two are reassigned below, so they must not be declared final.
            def error = false
            def defaultFile
            final def defaultFileURL = cl.getResource(it + defaultSuffix)
            if (defaultFileURL) {
                defaultFile = new File(defaultFileURL.toURI())
                error = !defaultFile.exists()
            } else {
                error = true
            }
            if (error) {
                LOG.severe "Neither ${file} nor ${defaultFile} exists. Skipping..."
                return
            }

            LOG.warning "${file} does not exist. Copying ${defaultFile} -> ${file}..."
            try {
                FileUtils.copyFile(defaultFile, file)
            } catch (e) {
                LOG.severe "Couldn't copy ${defaultFile} -> ${file}. Skipping..."
                e.printStackTrace()
                return
            }

            result << file.toURI().toURL()
        }

        return result
    }
}