I need to pass a configuration value from one CDK stack to another and use that value in the second stack's construct code bundling step. For example, the first stack creates an S3 bucket and the second stack creates a Lambda@Edge function, which doesn't support environment variables and therefore needs the S3 bucket name embedded in its code during bundling in a custom construct. When I do this I get values like ${Token[TOKEN.214]} instead of a real bucket name.
let bucket: Bucket
// This function builds and bundles the code for Lambda@Edge
buildLambdaCode(..., { env: { S3_BUCKET: bucket.bucketName } })
Indeed, CDK-generated resource names are Tokens at code-time, CloudFormation Refs at synth-time, and "actual names" only at deploy-time. So your options to pass the bucket name to your Lambda@Edge are:
Environment Variable: the default solution, but Lambda@Edge does not support them*, as you say.
SSM Parameter + SDK Call: another typical solution, but the added latency defeats the purpose of the edge function.
CloudFront Custom Header: as in this SO question. You can set customHeaders on CDK CloudFront origin constructs (see the sketch after this list).
Hardcode the Bucket Name: the best-practice gods may forgive you setting an explicit bucketName in this case.
* The experimental cloudfront.experimental.EdgeFunction construct dangerously accepts environment variable props (it's unclear whether they error or are silently ignored at deploy time). The stable cloudfront.Function does not, in line with the service docs.
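A rough sketch of the custom-header option (construct and property names as in aws-cdk-lib v2; bucket and edgeFn are assumed to already exist in the distribution's stack, and the header name is a placeholder):

import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

// Instead of baking the bucket name into the bundle, pass it as a custom origin header.
const distribution = new cloudfront.Distribution(this, 'Distribution', {
  defaultBehavior: {
    origin: new origins.S3Origin(bucket, {
      customHeaders: { 'x-s3-bucket-name': bucket.bucketName },
    }),
    edgeLambdas: [{
      eventType: cloudfront.LambdaEdgeEventType.ORIGIN_REQUEST,
      functionVersion: edgeFn.currentVersion,
    }],
  },
});
// At runtime the handler can read the header from
// event.Records[0].cf.request.origin.s3.customHeaders['x-s3-bucket-name'][0].value

The header is only visible on origin-facing events (origin request/response), so the edge function must be attached at one of those event types.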
I am trying to make a POC, so I'm using a really simple use case.
In it, I use a src/lib/db.ts which, for our purposes, contains this code:
console.log(import.meta.env.MONGO_URI, import.meta.env.SSR);
giving
undefined true
Of course, my .env file contains a definition for MONGO_URI. I tried with VITE_MONGO_URI and could see the value.
I know one way to expose it is to use the VITE_ prefix, but my point is exactly not to expose it on the client side.
I checked and the file db.ts is not bundled with the client; the fact that import.meta.env.SSR is true even shows that the bundler knows this code runs on the server.
Question: how can I access my private environment variables server-side?
EDIT: As pointed out by Shriji Kondan, the API for this purpose has now been created: here
You could use dotenv on the server side. Assuming you are using the node adapter, you can have a file _constants.ts in your app:
import 'dotenv/config';
export const MONGO_URI = process.env.MONGO_URI;
and then import this variable into your script.
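For example, to address the original db.ts case (paths are illustrative; this assumes _constants.ts sits next to db.ts):

// src/lib/db.ts -- runs only on the server
import { MONGO_URI } from './_constants';

console.log(MONGO_URI); // now prints the value from .env instead of undefined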
It's not a good idea to put secrets in client-side code. Instead, put something like SUPER_SECRET_API_KEY="$ecret#p1Key" in your .env file and read it from a server-only utility, e.g. src/lib/utilities/utility.js, as explained here:
import { SUPER_SECRET_API_KEY } from '$env/static/private';
export function performApiAction() {
  const apiInstance = initialiseApi({ key: SUPER_SECRET_API_KEY });
}
or from +page.server.ts via form actions, as stated here, which is the preferable way but more complex.
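For illustration, a rough sketch of the form-action approach (the API URL, header, and route are placeholders; assumes a current SvelteKit version with the $env modules):

// src/routes/+page.server.ts
import { SUPER_SECRET_API_KEY } from '$env/static/private';
import type { Actions } from './$types';

export const actions: Actions = {
  default: async ({ request }) => {
    const data = await request.formData();
    // The secret never reaches the browser; only the response body is returned to the page.
    const res = await fetch(
      'https://api.example.com/search?q=' + encodeURIComponent(String(data.get('query'))),
      { headers: { Authorization: `Bearer ${SUPER_SECRET_API_KEY}` } },
    );
    return { result: await res.json() };
  },
};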
I am trying to create a CdkPipeline with multiple source CodeCommit repositories.
I followed instructions from cdkworkshop to successfully create a self-mutating pipeline with a single CodeCommit repository, but cannot figure out how to add more packages (CodeCommit repositories) inside the source stage.
I did see examples from https://docs.aws.amazon.com/cdk/api/latest/docs/aws-codepipeline-actions-readme.html#build--test but this does not provide CDK's self-mutating capability.
This example https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html#codepipeline_example_stack seems a bit more promising but it looks too manual.
Any help would be appreciated.
Late to the response, but here is the solution from the AWS documentation.
If you are using a ShellStep for your initial step, utilize the additional_inputs property:
additional_inputs (Optional[Mapping[str, IFileSetProducer]]) – Additional FileSets to put in other directories. Specifies a mapping from directory name to FileSets. During the script execution, the FileSets will be available in the directories indicated. The directory names may be relative. For example, you can put the main input and an additional input side-by-side with the following configuration:

const script = new pipelines.ShellStep('MainScript', {
  commands: ['npm ci', 'npm run build', 'npx cdk synth'],
  input: pipelines.CodePipelineSource.gitHub('org/source1', 'main'),
  additionalInputs: {
    '../siblingdir': pipelines.CodePipelineSource.gitHub('org/source2', 'main'),
  },
});

Default: - No additional inputs
The above example shows how to add an additional GitHub repository as an input and map it to the directory ../siblingdir in your ShellStep.
This will work for some types of Steps (for example, CodeBuildStep), but it's recommended to check the documentation to confirm.
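Applied to the CodeCommit case from the question, a rough TypeScript sketch inside a CDK Stack might look like this (repository and branch names are placeholders):

import * as codecommit from 'aws-cdk-lib/aws-codecommit';
import * as pipelines from 'aws-cdk-lib/pipelines';

// The primary repository drives the self-mutating pipeline; the second repository
// is checked out into a sibling directory via additionalInputs.
const mainRepo = codecommit.Repository.fromRepositoryName(this, 'MainRepo', 'main-repo');
const extraRepo = codecommit.Repository.fromRepositoryName(this, 'ExtraRepo', 'extra-repo');

new pipelines.CodePipeline(this, 'Pipeline', {
  synth: new pipelines.ShellStep('Synth', {
    input: pipelines.CodePipelineSource.codeCommit(mainRepo, 'main'),
    additionalInputs: {
      '../extra-repo': pipelines.CodePipelineSource.codeCommit(extraRepo, 'main'),
    },
    commands: ['npm ci', 'npm run build', 'npx cdk synth'],
  }),
});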
I have a truststore file (a binary file) that I need to provide during helm upgrade. This file is different for each target env (dev, qa, staging, or prod), so I can only provide it at deployment time. helm upgrade --set-file does not take a binary file; this seems to be the issue described here: https://github.com/helm/helm/issues/3276. These truststore files are stored in the Jenkins credential store.
The --set-file flag itself is described as follows:
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
It is also important to know The Format and Limitations of --set.
The error you see: Error: failed parsing --set-file data... means that the file you are trying to use does not meet the requirements. See the example below:
--set-file key=filepath is another variant of --set. It reads the file and uses its content as a value. An example use case is to inject multi-line text into values without dealing with indentation in YAML. Say you want to create a brigade project with a certain value containing 5 lines of JavaScript code; you might write a values.yaml like:
defaultScript: |
  const { events, Job } = require("brigadier")
  function run(e, project) {
    console.log("hello default script")
  }
  events.on("run", run)
Being embedded in YAML makes it harder to use IDE features, testing frameworks, and other tooling that supports writing code. Instead, you can use --set-file defaultScript=brigade.js with brigade.js containing:
const { events, Job } = require("brigadier")
function run(e, project) {
  console.log("hello default script")
}
events.on("run", run)
I hope it helps.
Is there an easy way to get hold of a path object so I can check whether a given label path exists? Say, for example, if path.exists("@external_project_name//:filethatmightexist.txt"):. I can see that the repository context has this, but then I need a wrapping repository rule. Is it possible to do this in a macro or a Skylark native call instead?
Even with a repository_rule, I had a lot of trouble with this due to what you already pointed out:
if you create a Label with a path that doesn't exist, it will cause the build to fail
But if you're willing to do a repository rule, here's a possible solution...
In this example, my rule allows specification of a default configuration if a config file is not present. The config file can be added to .gitignore and overridden by individual developers, but everything works out of the box in most cases.
I think I now understand why the ctx.actions functions have sibling arguments; it's the same idea here. The trick is that config_file_location is a true label, while config_file is a string attribute. I chose BUILD arbitrarily, but since all workspaces have a top-level BUILD file that's visible, it seemed legit-ish.
WORKSPACE Definition
...
workspace(name="E02_mysql_database")
json_datasource_configuration(name="E02_datasources",
    config_file_location="@E02_mysql_database//:BUILD",
    config_file="database.json")
The definition for json_datasource_configuration looks like this:
json_datasource_configuration = repository_rule(
    attrs = {
        "config_file_location": attr.label(
            doc = """
            Path relative to the repository root for a datasource config file.
            """),
        "config_file": attr.string(
            doc = """
            Config file, maybe absent
            """),
        "default_config": attr.string(
            # better way to do this?
            default = "None",
            doc = """
            If no config is at the path, then this will be the default config.
            Should look something like:
            {
                "datasource_name": {
                    "host": "<host>"
                    "port": <port>
                    "password": "<password>"
                    "username": "<username>"
                    "jdbc_connection_string": "<optional>"
                }
            }
            There can be more than one datasource configured... maybe, eventually.
            """,
        ),
    },
    local = True,
    implementation = _json_config_impl,
)
Then in the rule implementation I can test for the file's existence and, if it is not present, fall back to other logic.
def _json_config_impl(ctx):
    """
    Allows you to specify a file on disk to use for the data connection.
    If you pass a default_config, it is used when the file is absent.
    """
    config_path = ctx.path(ctx.attr.config_file_location).dirname.get_child(ctx.attr.config_file)
    config = ""
    if config_path.exists:
        config = ctx.read(config_path)
    elif ctx.attr.default_config == "None":
        fail("Could not find config at %s, you must supply a default_config if this is intentional" % ctx.attr.config_file)
    else:
        config = ctx.attr.default_config
    ...
Probably too late to help, but your question is the only thing I found referencing this goal. If someone knows a better way, I am looking for other options; it's complicated to explain to other developers why the rule has to work the way it does.
Also note that if you change the config file, you have to run a clean to get the workspace to re-read the config. I haven't been able to figure out any way to fix that; glob() does not work in the workspace.
I'm running Jenkins and I have it successfully working with my GitHub account, but I can't get it working correctly with Amazon S3.
I installed the S3 plugin and when I run a build it successfully uploads to the S3 bucket I specify, but all of the uploaded files end up in the root of the bucket. I have a bunch of folders (such as /css, /js, and so on), but all of the files in those folders from GitHub end up in the root of my S3 account.
Is it possible to get the S3 plugin to upload and retain the folder structure?
It doesn't look like this is possible. Instead, I'm using s3cmd to do this. You must first install it on your server, and then in one of the bash scripts within a Jenkins job you can use:
s3cmd sync -r -P $WORKSPACE/ s3://YOUR_BUCKET_NAME
That will copy all of the files to your S3 account maintaining the folder structure. The -P keeps read permissions for everyone (needed if you're using your bucket as a web server). This is a great solution using the sync feature, because it compares all your local files against the S3 bucket and only copies files that have changed (by comparing file sizes and checksums).
I have never worked with the S3 plugin for Jenkins (now that I know it exists, I might give it a try), but looking at the code, it seems you can only do what you want using a workaround.
Here's what the actual plugin code does (taken from GitHub); I removed the parts of the code that are not relevant, for the sake of readability:
class hudson.plugins.s3.S3Profile, method upload:
final Destination dest = new Destination(bucketName,filePath.getName());
getClient().putObject(dest.bucketName, dest.objectName, filePath.read(), metadata);
Now if you take a look into hudson.FilePath.getName()'s JavaDoc:
Gets just the file name portion without directories.
Now, take a look at the hudson.plugins.s3.Destination constructor:
public Destination(final String userBucketName, final String fileName) {
    if (userBucketName == null || fileName == null)
        throw new IllegalArgumentException("Not defined for null parameters: "+userBucketName+","+fileName);

    final String[] bucketNameArray = userBucketName.split("/", 2);
    bucketName = bucketNameArray[0];
    if (bucketNameArray.length > 1) {
        objectName = bucketNameArray[1] + "/" + fileName;
    } else {
        objectName = fileName;
    }
}
The Destination class JavaDoc says:
The convention implemented here is that a / in a bucket name is used to construct a structure in the object name. That is, a put of file.txt to bucket name of "mybucket/v1" will cause the object "v1/file.txt" to be created in the mybucket.
Conclusion: the filePath.getName() call strips off any prefix you add to the file (S3 does not have directories, only prefixes; see this and this thread for more info). If you really need to put your files into a "folder" (i.e. give them a prefix containing a slash (/)), I suggest adding this prefix to the end of your bucket name, as explained in the Destination class JavaDoc.
Yes, this is possible.
It looks like you'll need a separate instance of the S3 plugin for each folder destination, however.
"Source" is the file you're uploading.
"Destination bucket" is where you place your path.
Using Jenkins 1.532.2 and S3 Publisher Plug-In 0.5, the UI configure-job screen rejects additional S3 publish entries. There would also be a significant maintenance benefit for us if the plugin recreated the workspace directory structure, as we'll have many directories to create.
Set up your Git plugin.
Set up your Bash script.
Everything in the folder you mark with "*" will go to the bucket.