Create Lambda functions using AWS CDK with 2 different languages in one project - aws-cdk

I need to create Lambda functions using 2 different languages in one CDK project:
TypeScript (for general purpose)
Python (for data engineering that uses Python libraries like pandas, numpy, etc.)
I expect that if I do something like cdk deploy, each of my Lambda functions will be deployed with its own runtime environment.
Is it possible?
Any answer will be appreciated.

Your question is a little vague so I'm not sure exactly what you are going for, but if you just want to create two Lambda functions where one is written in TypeScript and the other is written in Python, then that is fairly straightforward. You just need to specify the runtime.
Here is some basic boilerplate for Python-flavored CDK which deploys two different Lambda functions.
from aws_cdk import aws_lambda as _lambda

my_typescript_lambda = _lambda.Function(
    scope=self,
    id="typescript_lambda",
    runtime=_lambda.Runtime.NODEJS_14_X,
    # Path is relative to where you execute cdk
    code=_lambda.Code.from_asset("lambda_funcs/typescript_lambda"),
    handler="typescript_lambda.handler",
    description="A lambda function written in Typescript",
)

my_python_lambda = _lambda.Function(
    scope=self,
    id="python_lambda",
    runtime=_lambda.Runtime.PYTHON_3_9,
    code=_lambda.Code.from_asset(path="lambda_funcs/python_lambda"),
    handler="python_lambda.lambda_handler",
    description="A lambda function written in python",
)
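For reference, the handler string python_lambda.lambda_handler above implies a file python_lambda.py inside lambda_funcs/python_lambda/ that defines a lambda_handler function. A minimal sketch of such a handler (the body is purely illustrative, not part of the original answer) could be:

# lambda_funcs/python_lambda/python_lambda.py
import json

def lambda_handler(event, context):
    # Purely illustrative body; real data-engineering code would go here.
    # Heavy libraries such as pandas/numpy are not included in the default
    # Lambda Python runtime, so they need to be bundled into the asset or
    # supplied via a Lambda layer.
    return {"statusCode": 200, "body": json.dumps({"received": event})}

Similarly, the Node.js runtime expects compiled JavaScript in lambda_funcs/typescript_lambda, so the TypeScript source has to be transpiled before cdk deploy (or you could use the higher-level NodejsFunction construct, which bundles it for you).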

Related

Global shared variables in Jenkins Groovy pipelines

It seems like it's really difficult to store a bunch of variables for use in shared code in Jenkins/Groovy scripted pipelines. I've tried a bunch of methods and none of them seem to give the desired result.
This method looked the most promising, but the values all came back as null in the calling pipeline: Get Global Variables in Jenkins pipeline.
My code is something like
import org.blabla.JobHelper
println("env.NO_PROXY: -->${env.NO_PROXY}<--")
And in the JobHelper.groovy file, I've defined
package org.blabla.project
env.NO_PROXY = 'localhost,127.0.0.1,169.254.169.254'
The names have been changed a bit to protect the innocent, but you get the idea.
The script just prints null for the value.
Is there a simple way (or indeed any way) that I can pull in a bunch of variables from a shared library file? This feels like it should be a really simple exercise, but after spending many hours searching I'm none the wiser.
In general, env is only available once the pipeline has started, but Groovy scripts are resolved much earlier.
I'm using static class members as global variables. Applied to your code sample, it would look like this:
JobHelper.groovy
package org.blabla.project
// Class must be named like the file that contains it.
class JobHelper {
    static String getNO_PROXY() { 'localhost,127.0.0.1,169.254.169.254' }
}
Elsewhere:
import org.blabla.project.JobHelper
println("NO_PROXY: -->${JobHelper.NO_PROXY}<--")
Note that Groovy automatically generates properties from get*() and set*() methods, so we can use the short form instead of having to write JobHelper.getNO_PROXY().

Creating Dev and Prod deployments using CDK

I am new to AWS CDK, so please bear with me. I am trying to create a multi-deployment CDK app, wherein Dev should be deployed to Account A and Prod to Account B.
I have created 2 stacks with the respective account numbers and so on.
mktd_dev_stack = Mktv2Stack(app, "Mktv2Stack-dev",
    env=cdk.Environment(account='#####', region='us-east-1'),
    stack_name="myStack-Dev",
    # For more information, see https://docs.aws.amazon.com/cdk/latest/guide/environments.html
)
The Prod stack is similar, with the Prod account and a different name. When I run them I plan on doing
cdk deploy Mktv2Stack-dev
and similar for prod.
I am using CDK 2.x with Python.
My question is: does this setup give me the ability to pass a parameter, say details, which is a dict object of names and criteria for the resources that will be set up? Or is there a way for me to pass a parameter/dict from app.py to my program_name.py so that I can look up values from the dict and set them on resources accordingly?
Regards Tanmay
TL;DR Use a single stack and pass in the stg/prod as an env var to app.py.
Pass config down from app.py > Stacks > Constructs as Python Parameters (constructor args). Avoid using CDK Parameters* for config, says AWS's CDK Application Best Practices.
Practically speaking, you pass the account or alias as an environment variable, which app.py reads to perform the metadata lookups and set the stack props. Here's a node-flavoured version of this pattern:
AWS_ACCOUNT=123456789012 npx cdk deploy '*' -a 'node ./bin/app' --profile test-account
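Since the question is written in Python CDK, here is a rough Python sketch of the same pattern. Everything specific in it (the AWS_ACCOUNT variable, the prod account id, the module path and the details dict) is an illustrative assumption, not taken from the original code:

# app.py -- minimal sketch, assuming an AWS_ACCOUNT env var selects the environment
import os

import aws_cdk as cdk

from mktv2.mktv2_stack import Mktv2Stack  # hypothetical module path

account = os.environ["AWS_ACCOUNT"]        # e.g. 123456789012
is_production = account == "222222222222"  # hypothetical prod account id

# Plain Python dict used as configuration; it is passed down as an ordinary
# constructor argument (a "Python parameter"), not a CloudFormation Parameter.
details = {
    "bucket_prefix": "mktd-prod" if is_production else "mktd-dev",
    "retain_data": is_production,
}

app = cdk.App()
Mktv2Stack(
    app,
    "Mktv2Stack",
    env=cdk.Environment(account=account, region="us-east-1"),
    details=details,
    is_production=is_production,
)
app.synth()

The stack then receives the dict as a normal __init__ argument and hands values to its constructs:

# mktv2_stack.py -- hypothetical stack accepting the config dict
from aws_cdk import Stack
from constructs import Construct

class Mktv2Stack(Stack):
    def __init__(self, scope: Construct, construct_id: str, *,
                 details: dict, is_production: bool, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # use details["bucket_prefix"], is_production, etc. to configure resources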
Why not 2 stacks in app.py, one for PROD and one for STAGING?
A 2-stack approach can certainly work. The downsides are that you rarely want to deploy both environments at the same time (outside a CI/CD context), and cross-account permissions are trickier to handle safely if mixed in a single cdk deploy.
Customising Constructs for different environments
Within your code, use a dict, class or whatever to return the configuration you want based on an account or region input. Finally, pass the variables to the constructs. Here's an example of code that uses account, region and isProduction props to customise an S3 bucket:
const queriesBucket = new s3.Bucket(this, 'QueriesBucket', {
  bucketName: `${props.appName.toLowerCase()}-queries-${props.env.account}-${props.env.region}`,
  removalPolicy: props.isProduction
    ? cdk.RemovalPolicy.RETAIN
    : cdk.RemovalPolicy.DESTROY,
  versioned: props.isProduction,
  lifecycleRules: [
    {
      id: 'metadata-rule',
      prefix: 'metadata',
      noncurrentVersionExpiration: props.isProduction
        ? cdk.Duration.days(30)
        : cdk.Duration.days(14),
    },
  ],
});
* "Parameter" has different meaning in Python and CDK. Passing variables between constructs in code using Python Parameters (=method arguments) is a best practice. In CDK-speak a Parameter has the special meaning of a variable value passed to CloudFormation at deploy time. These are not CDK best practice.

Requiring workflow class in workflow starter code

I have a simple question regarding the architecture of my Amazon Simple Workflow / AWS Flow for Ruby app. For background, I have a simple workflow with one activity running in an AWS Flow for Ruby layer on Opsworks. I have a separate REST API running in a Rails App Server layer on Opsworks that I would like to kick off the workflow.
The code in the REST API that kicks off the workflow:
1: domain = AWS::SimpleWorkflow.new.domains['my_domain']
2: workflow_client = AWS::Flow::workflow_client(domain.client, domain) {{from_class: MyWorkflowClass}}
3: workflow_client.start_execution(input_1: @input1, input_2: @input2)
My assumption is that my workflow and REST API code bases could be separate and that the only common component would be the aws-flow Ruby gem and require 'aws/decider'. However, I'm finding that my REST API also needs to have require 'PATH_TO_MY_WORKFLOW_CLASS'. When I remove that line of code from the code file in my REST API that kicks off the workflow, I get the following error:
undefined method `_options' for nil:NilClass; ["/Users/MyName/.rvm/gems/ruby-2.0.0-p247/gems/aws-flow-2.2.1/lib/aws/decider/utilities.rb:183:in `interpret_block_for_options'", "/Users/MyName/.rvm/gems/ruby-2.0.0-p247/gems/aws-flow-2.2.1/lib/aws/decider/implementation.rb:73:in `workflow_client'"
(error at line 2 above)
Am I mistaken? Do I really need to require MyWorkflowClass in my workflow starter app (i.e. my REST API) or am I doing something wrong? I've scoured the documentation and could not find a clear answer to this. All the samples that I can find do indeed have the workflow class included in the workflow starter code, but I'm not sure if it's because they are bundled as a simple sample or if it's because it's the way it's supposed to be. The reason why I am not taking the samples at face value is because requiring the workflow class in the workflow starter code does not make any sense to me. It binds the two apps way too tightly for my taste.
I posted an issue on the aws-flow-ruby SDK and got an answer from an Amazon engineer. In short, you can use the :from_class option, or the :prefix_name and :execution_method options together.
There are two ways of starting the workflow in code:
1) Using the AWS SDK directly.
In this case, your code doesn't need to know anything about the workflow class. You just need the domain, workflow type (name and version) and the workflow id.
It will look something like -
require 'aws-sdk-v1'

swf = AWS::SimpleWorkflow.new.client
swf.start_workflow_execution(
  domain: "HelloWorld",
  workflow_type: {
    name: "HelloWorldWorkflow",
    version: "1.0"
  },
  workflow_id: "foo",
  input: ....,
  ....other options (optional)...
)
As you can see above, this doesn't require the workflow class at all.
2) Using the aws-flow gem (which is what you are doing above).
There are two ways of using the workflow client provided by the aws-flow gem to start an execution. You can either use the client as a generic client and not tie it to any workflow class or you can use the :from_class option to fetch options from a particular workflow class. To use the from_class option, you need to have the class in the ObjectSpace (hence you need to require the workflow file).
With from_class -
require 'aws/decider'
domain = AWS::SimpleWorkflow.new.domains['my_domain']
workflow_client = AWS::Flow::workflow_client(domain.client, domain) {{from_class: "MyWorkflowClass"}}
workflow_client.start_execution(input_1: @input1, input_2: @input2)
Without from_class -
require 'aws/decider'
domain = AWS::SimpleWorkflow.new.domains['my_domain']
workflow_client = AWS::Flow::workflow_client(domain.client, domain) {{
  prefix_name: "YourClassName",
  execution_method: "workflow_method_name",
  version: "1.0",
  ...other options...
}}
workflow_client.start_execution(input_1: @input1, input_2: @input2)
The recommended way to start a workflow execution is to use aws-flow WorkflowClient instead of using the SDK directly.
Additional notes with respect to the input accepted by a workflow:
The SDK and the console will only take strings as input. This can be a free-form string, but if your workflow is written using Ruby Flow, this string should be a serialized form of your input so that the WorkflowWorker can deserialize it when it picks up the task and convert it into Ruby objects (in this case a hash).
When you use the Ruby Flow WorkflowClient, the client will automatically serialize your input hash (or any other input) into a string before sending it to SWF. By default, aws-flow uses a YAML-based data converter to do this (it can be overridden).
If you just want to see what your input hash will look like as a string, you can do the following -
AWS::Flow::FlowConstants.default_data_converter.dump(input_hash)
You can then use this serialized input to start a workflow using the SDK or the console.

Working with Xtext not as a plugin

In general I have my DSL as a plugin and I want to create a new app that uses my DSL,
so I tried to write this code:
JsonParser p = new JsonParser();
IParseResult r = p.parse(new StringReader("{}"));
// once that works, it will be the file data instead of {}
but when I do the parse, the node model builder is null and the following line throws an exception:
return doParse(ruleName, in, nodeModelBuilder.get(), 0);
and I'm not sure how to init nodeModelBuilder.
I'm sure I'm missing some steps, but I'm not quite familiar with the Xtext process.
Thanks!
You already read the following answer on the Eclipse Forum: you need to create an IParser instance by injecting it. All dependencies also get injected. The necessary bindings are described in your JsonRuntimeModule. Xtext uses Guice and these modules to glue everything together. This pattern is called Dependency Injection.
... I want to create a new app that use my dsl
So you want to use your Json DSL in standalone mode.
My suggestion:
Create a minimal Eclipse IApplication with a CLI that reads and parses an input file. The advantage of an Eclipse IApplication is that you can easily deploy a headless version of your DSL runtime. [1]
Have a look at your JsonInjectorProvider and the ParseHelper [2] from Xtext's JUnit support for examples how to use your DSL and Xtext runtime in standalone mode.
[1] http://www.eclipsezone.com/eclipse/forums/t99762.html
[2] org.eclipse.xtext.junit.util.ParseHelper
You are not supposed to call parser directly. See:
http://wiki.eclipse.org/Xtext/FAQ#How_do_I_load_my_model_in_a_standalone_Java_application.C2.A0.3F
The code should look like:
Injector injector = new MyDslStandaloneSetup().createInjectorAndDoEMFRegistration();
XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
resourceSet.addLoadOption(XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);
Resource resource = resourceSet.getResource(URI.createFileURI("/../../some.json"), true);
Model modelRootElement = (Model) resource.getContents().get(0);
Replace MyDsl with 'JsonParser' or 'Json' or whatever your DSL name is. Look for a class named JsonStandaloneSetup or JsonParserStandaloneSetup in your DSL source code. This class is generated when you create the Xtext project (or when you run the workflow for the first time, I'm not sure which). Replace Model with whatever your root element type is; it must be an EObject subclass.
The parsing/validation/building of the AST is done by the resource.getContents() call. Not very intuitive, I know. It is because you have to initialize context, all sorts of contexts in fact: the Guice context, the EMF context, and perhaps others, all encapsulated in the StandaloneSetup (and RuntimeModule). The context is similar to a Spring Application Context.
You need to use StandaloneSetup to run in standalone mode.
See this tutorial for help

Statically analysing Lua code for potential errors

I'm using a closed-source application that loads Lua scripts and allows some customization through modifying these scripts. Unfortunately that application is not very good at generating useful log output (all I get is 'script failed') if something goes wrong in one of the Lua scripts.
I realize that dynamic languages are pretty much resistant to static code analysis in the way C++ code can be analyzed for example.
I was hoping, though, that there would be a tool that runs through a Lua script and e.g. warns about variables that have not been defined in the context of a particular script.
Essentially what I'm looking for is a tool that for a script:
local a
print(b)
would output:
warning: script.lua(1): local 'a' is not used
warning: script.lua(2): 'b' may not be defined
It can only really be warnings for most things, but that would still be useful! Does such a tool exist? Or maybe a Lua IDE with a feature like that built in?
Thanks, Chris
Automated static code analysis for Lua is not an easy task in general. However, for a limited set of practical problems it is quite doable.
Quick googling for "lua lint" yields these two tools: lua-checker and Lua lint.
You may want to roll your own tool for your specific needs however.
Metalua is one of the most powerful tools for static Lua code analysis. For example, please see metalint, the tool for global variable usage analysis.
Please do not hesitate to post your question on Metalua mailing list. People there are usually very helpful.
There is also lua-inspect, which is based on metalua that was already mentioned. I've integrated it into ZeroBrane Studio IDE, which generates an output very similar to what you'd expect. See this SO answer for details: https://stackoverflow.com/a/11789348/1442917.
For checking globals, see this lua-l posting. Checking locals is harder.
You need to find a parser for Lua (one should be available as open source) and use it to parse the script into a proper AST. Use that tree and a simple variable-visibility tracker to find out when a variable is or isn't defined.
Usually the scoping rules are simple:
start with the top AST node and an empty scope
look at the child statements for that node. Every variable declaration should be added to the current scope.
if a new scope is starting (for example via a { operator), create a new variable scope inheriting the variables in the current scope.
when a scope is ending (for example via }), remove the current child variable scope and return to the parent.
Iterate carefully.
This will tell you which variables are visible where inside the AST. You can use this information, and if you also inspect the expression AST nodes (reads/writes of variables), you can derive the warnings you want. A minimal sketch of this idea follows below.
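As a rough illustration of this scope-tracking idea, here is a minimal sketch in Python rather than Lua, with a made-up Node class standing in for whatever AST your parser of choice produces:

# Hypothetical AST node: kind is "block", "local" or "ident";
# name holds an identifier, children holds nested nodes.
class Node:
    def __init__(self, kind, name=None, children=()):
        self.kind = kind
        self.name = name
        self.children = list(children)

def check(node, scope=None, warnings=None):
    """Walk the AST, tracking declared names per scope, and warn on unknown reads."""
    scope = set() if scope is None else scope
    warnings = [] if warnings is None else warnings
    if node.kind == "local":      # a declaration adds the name to the current scope
        scope.add(node.name)
    elif node.kind == "ident":    # a read of a name not declared in any enclosing scope
        if node.name not in scope:
            warnings.append(f"'{node.name}' may not be defined")
    for child in node.children:
        # A nested block gets a copy of the scope, so names it declares
        # disappear again when the block ends.
        child_scope = set(scope) if child.kind == "block" else scope
        check(child, child_scope, warnings)
    return warnings

# Mirrors the two-line script from the question:
tree = Node("block", children=[Node("local", "a"), Node("ident", "b")])
print(check(tree))  # ["'b' may not be defined"]

A real checker would also track reads versus writes so it can report unused locals, as in the warnings shown in the question.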
I just started using luacheck and it is excellent!
The first release was from 2015.
