Does aws-cdk support uploading external dependencies for lambda functions?

I want to use an external dependency (wink-pos-tagger) in my lambda and be able to deploy it with aws-cdk. I know there is a manual way of doing it, but I would rather have it all happen in one command with the aws-cdk. Does aws-cdk support this?

Not at the moment, but it seems like it's on the CDK's roadmap.

You can use Lambda Layers - https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-lambda.LayerVersion.html - to build custom packages with external dependencies that can then be used in your Lambda functions.
Or you can use webpack to bundle your functions before running cdk deploy.
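For example, here is a minimal sketch of the layer approach using the CDK's Python bindings (the stack name, construct names, asset paths, and runtime are illustrative; for an npm dependency like wink-pos-tagger, the layer asset directory must contain it under nodejs/node_modules):

from aws_cdk import core
from aws_cdk import aws_lambda as lambda_

class TaggerStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Layer asset layout: ./layer/nodejs/node_modules/wink-pos-tagger/...
        deps_layer = lambda_.LayerVersion(
            self, "DepsLayer",
            code=lambda_.Code.from_asset("layer"),
            compatible_runtimes=[lambda_.Runtime.NODEJS_12_X],
        )

        lambda_.Function(
            self, "TaggerFn",
            runtime=lambda_.Runtime.NODEJS_12_X,
            handler="index.handler",
            code=lambda_.Code.from_asset("src"),  # handler code only, no node_modules
            layers=[deps_layer],
        )

Both assets are zipped and uploaded as part of cdk deploy, so the whole workflow stays a single command.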

Related

Build Beam pipelines using Bazel (with DataflowRunner)

I use Bazel to build my Beam pipeline. The pipeline works well using the DirectRunner, but I have some trouble managing dependencies when I use the DataflowRunner: Python cannot find local dependencies (e.g. those generated by py_library) on the DataflowRunner. Is there any way to hint to Dataflow to use the Python binary (the py_binary zip file) in the worker container to resolve the issue?
Thanks,
Please see here for more details on setting up dependencies for the Python SDK on Dataflow. If you are using a local dependency, you should probably look into building a Python package and using the extra_package option, or developing a custom container.
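For illustration, a minimal sketch of the extra_package route with the Beam Python SDK (the project, region, bucket, and package file name are hypothetical; the tarball is an sdist you would build from the same code your py_library targets cover):

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Dataflow stages the listed tarball and pip-installs it on every worker.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                          # hypothetical
    region="us-central1",                          # hypothetical
    temp_location="gs://my-bucket/tmp",            # hypothetical
    extra_packages=["dist/my_deps-0.0.1.tar.gz"],  # hypothetical sdist
)

with beam.Pipeline(options=options) as p:
    _ = p | beam.Create(["a", "b"]) | beam.Map(print)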

Jenkins Pipeline - Call functions in shared jar

So here is my project setup:
- A separate Groovy project
- Multiple pipelines
All the pipeline scripts refer to the shared Groovy project. I went through Shared Libraries, and it all needs to be registered in Jenkins' global configuration.
Is there any way to do without it? I tried using Grab but ended up with the error
java.lang.RuntimeException: No suitable ClassLoader found for grab
Firstly, for Grab to work your Jenkins needs to have access to the internet.
Shared Libraries are definitely the way to go here.
Like many things, the secret sauce is in the syntax: you pull a library into a pipeline with the @Library annotation at the top of your Jenkinsfile (or dynamically with the library step).

Working with versions on Jenkins Pipeline Shared Libraries

I'm trying to figure out how to work with a specific version of a Shared Library.
The Jenkins documentation about this isn't quite clear, so I've been experimenting, but with no success.
The documentation basically says that you can pick a library version with the @Library annotation, but how should I configure somelib in the 'Global Pipeline Libraries' section (under Manage Jenkins > System Config) so I can use any of the available stable versions?
The thing is:
Imagine that I have my somelib project under version control and, currently, I have released 2 stable versions of it: v0.1 and v0.2 (so I have 2 tags, named v0.1 and v0.2).
In one Pipeline I want to use somelib's v0.1, and in another Pipeline I need v0.2.
How can I do this using the @Library annotation provided by Jenkins?
In Global Pipeline Libraries under Manage Jenkins > System Config you only set the default library version, which is used when no other version is specified inside the Jenkinsfile.
Inside the Jenkinsfile you can explicitly specify which version you want to use if you do not want the default:
@Library('somelib@<tag/branch/commitRef>') _
That way you can freely choose, at any time, which library version to use for your project.
Following up on @fishi's response, I just want to leave an important note.
During library configuration in Global Pipeline Libraries you must select the Modern SCM option so that things work seamlessly.
If you select Legacy Mode instead, you will not be able to use the library as desired.
If for some reason Modern SCM does not appear under the Retrieval Mode option, it means you need to upgrade the Global Pipeline Libraries plugin, or even Jenkins itself.
Basically "Version" is the branch name for the repo which stores the shared library codes. If you don't have any branch other than main or master, make sure to fill it in Default Version in your Global Pipeline Library configuration

Can I use third party libraries with Cloud Dataflow?

Does Cloud Dataflow allow you to use third-party library jar files? How about non-Java libraries?
Kaz
Yes, you can use third-party library files just fine. By default, when you run your Dataflow main program to submit your job, Dataflow analyzes your classpath, uploads any jars it sees, and adds them to the classpath of the workers.
If you need more control, you can use the command-line option --filesToStage to specify additional files to stage on the workers.
Another common technique is to build a single bundled jar that contains all your dependencies. One way to build a bundled jar is to use a Maven plugin like Shade.

How do you integrate Ivy with MSBuild

What approach has worked well for you in combining Ivy + MSBuild?
Our goal is to integrate Ivy into the C#/C++ build process for dependency resolution and publishing. We have tried adding it to custom tasks at the beginning and end of the build, and we have tried wrapping the MSBuild calls with Ant + apache-ant-dotnet.
Other options might be Gradle, Buildr, or Rake.
What do you use?
Thanks
Peter
Most build technologies can use libraries found in a local directory. I'd suggest using the command-line Ivy program to populate this directory at the start of your build:
java -jar ivy.jar -ivy ivy.xml -settings ivysettings.xml -retrieve "lib/[conf]/[artifact].[ext]"
Your dependencies are listed in a standard Ivy file called ivy.xml. The protocol, location, and layout of your remote repository are described in ivysettings.xml.
The advantage of this approach (as opposed to switching to Gradle, etc) is that you're not trying to replace your existing build tool. Ivy is solely concerned with managing dependencies.
My team has been using Ivy for .NET very successfully for a couple of years, and I know several other teams that would give it a vote of confidence.
Use it standalone and wrap the calls in MSBuild tasks; there is no need to use the Ant integration.
