I'm in the process of migrating entire CloudFormation stacks to Troposphere, including Lambda and Lambda-reliant CFN Custom Resources.
One of my goals is to avoid creating template files altogether, making the Python code the sole "source of truth" (i.e. without template files that, once created, can be edited and cause config drift).
This requires the ability to:
Pass a file-like object to the SAM builder (instead of a file name)
Call the AWS SAM builder from Python rather than from the CLI
My first naive idea was that I could import a few modules from aws-sam-cli, wrap the template in io.StringIO (to hold it as a file-like object), and presto! Then I looked at the source code for sam build and all hope left me:
I may not be able to use Docker/containers for building, as it maps the build environment, including the template files.
AWS SAM CLI is not designed as a purely callable set of library functions, similar to boto3. Close, but not quite.
Here is the core of the Python source
with BuildContext(template,
                  base_dir,
                  build_dir,
                  clean=clean,
                  manifest_path=manifest_path,
                  use_container=use_container,
                  parameter_overrides=parameter_overrides,
                  docker_network=docker_network,
                  skip_pull_image=skip_pull_image,
                  mode=mode) as ctx:
    builder = ApplicationBuilder(ctx.function_provider,
                                 ctx.build_dir,
                                 ctx.base_dir,
                                 manifest_path_override=ctx.manifest_path_override,
                                 container_manager=ctx.container_manager,
                                 mode=ctx.mode)
    try:
        artifacts = builder.build()
        modified_template = builder.update_template(ctx.template_dict,
                                                    ctx.original_template_path,
                                                    artifacts)
        move_template(ctx.original_template_path,
                      ctx.output_template_path,
                      modified_template)
        click.secho("\nBuild Succeeded", fg="green")
        msg = gen_success_msg(os.path.relpath(ctx.build_dir),
                              os.path.relpath(ctx.output_template_path),
                              os.path.abspath(ctx.build_dir) == os.path.abspath(DEFAULT_BUILD_DIR))
        click.secho(msg, fg="yellow")
This relies on a number of imports from aws-sam-cli's internal library, the build-focused ones being:
from samcli.commands.build.build_context import BuildContext
from samcli.lib.build.app_builder import ApplicationBuilder, BuildError, UnsupportedBuilderLibraryVersionError, ContainerBuildNotSupported
from samcli.lib.build.workflow_config import UnsupportedRuntimeException
It's clear that this isn't as simple as creating something like a boto3 client and away I go! It looks more like I'd have to fork the whole thing and throw out nearly everything except the build command, context and environment.
Interestingly enough, sam package and sam deploy are, according to the docs, merely aliases for aws cloudformation package and aws cloudformation deploy, meaning their functionality can be reached from boto3!
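Since the deploy half maps onto plain CloudFormation, here is a minimal sketch of what it could look like with boto3 change sets (the stack name, capabilities, and template body are placeholders, and make_change_set_args/deploy are hypothetical helpers introduced purely for illustration):

```python
# Hedged sketch: "sam deploy" ~ "aws cloudformation deploy", which in boto3
# terms is create_change_set followed by execute_change_set.

def make_change_set_args(stack_name, template_body):
    """Build the kwargs for CloudFormation's create_change_set call."""
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Capabilities": ["CAPABILITY_IAM"],
        "ChangeSetName": stack_name + "-changeset",
        "ChangeSetType": "UPDATE",  # use "CREATE" for a brand-new stack
    }

def deploy(stack_name, template_body, region="us-east-1"):
    import boto3  # imported here so the helper above stays dependency-free
    cfn = boto3.client("cloudformation", region_name=region)
    args = make_change_set_args(stack_name, template_body)
    cfn.create_change_set(**args)
    cfn.get_waiter("change_set_create_complete").wait(
        ChangeSetName=args["ChangeSetName"], StackName=stack_name)
    cfn.execute_change_set(ChangeSetName=args["ChangeSetName"],
                           StackName=stack_name)
```

The template_body here would be exactly the to_json() output of a troposphere Template, so no intermediate file is needed for the deploy step.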
Has somebody possibly already solved this issue? I've googled and searched here, but haven't found anything.
I use PyCharm and the AWS Toolkit, which is great for development and debugging, and from there I can run SAM builds, but that's "hidden" in the PyCharm plugins - which are written in Kotlin!
My current work-around is to create the CFN templates as temp files and pass them to the CLI commands which are called from Python - an approach I've always disliked.
I may put in a feature request with the aws-sam-cli team and see what they say, unless one of them reads this.
I've managed to launch sam local start-api from a python3 script.
Firstly, pip3 install aws-sam-cli
Then the individual command can be imported and run.
import sys
from samcli.commands.local.start_api.cli import cli
sys.exit(cli())
... provided there's a template.yaml in the current directory.
What I haven't (yet) managed to do is influence the command-line arguments that cli() would receive, so that I could tell it which -t template to use.
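One untested avenue: click falls back to sys.argv[1:] when a command is invoked with no explicit argument list, so overwriting sys.argv before calling cli() might be enough to pass -t through. The stand-in parser below only demonstrates the mechanism (it is not samcli's actual cli, and I haven't verified this against samcli itself):

```python
import argparse
import sys

# Stand-in for samcli's cli(): like click, argparse reads sys.argv[1:]
# when parse_args() is given no explicit argument list.
parser = argparse.ArgumentParser(prog="start-api")
parser.add_argument("-t", "--template", default="template.yaml")

sys.argv = ["start-api", "-t", "my-template.yaml"]  # set before calling cli()
args = parser.parse_args()
print(args.template)  # -> my-template.yaml
```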
Edit
Looking at the way aws-sam-cli integration tests work it seems that they actually kick off a process to run the CLI. So they don't actually pass a parameter to the cli() call at all :-(
For example:
class TestSamPython36HelloWorldIntegration(InvokeIntegBase):
    template = Path("template.yml")

    def test_invoke_returncode_is_zero(self):
        command_list = self.get_command_list(
            "HelloWorldServerlessFunction", template_path=self.template_path, event_path=self.event_path
        )
        process = Popen(command_list, stdout=PIPE)
        return_code = process.wait()
        self.assertEquals(return_code, 0)
.... etc
from https://github.com/awslabs/aws-sam-cli/blob/a83aa9e620ff679ca740496a3f1ff4872b88894a/tests/integration/local/invoke/test_integrations_cli.py
See also start_api_integ_base.py in the same repo.
I think on the whole this is to be expected because the whole thing is implemented in terms of the click command-line application framework. Unfortunately.
See for example http://click.palletsprojects.com/en/7.x/testing/ which says "The CliRunner.invoke() method runs the command line script in isolation ..." -- my emphasis.
I am using the following Python script to run SAM CLI commands. This should work for you too.
import os

try:
    LAMBDA_S3_BUCKET = "s3-bucket-name-in-same-region"
    AWS_REGION = "us-east-1"
    API_NAME = "YourAPIName"
    BASE_PATH = "/path/to/your/project/code/dir"
    STACK_NAME = "YourCloudFormationStackName"
    BUILD_DIR = "%s/%s" % (BASE_PATH, "build_artifact")

    if not os.path.exists(BUILD_DIR):
        os.mkdir(BUILD_DIR)

    os.system("cd %s && sam build --template template.yaml --build-dir %s" % (BASE_PATH, BUILD_DIR))
    os.system("cd %s && sam package --template-file %s/template.yaml --output-template-file packaged.yaml --s3-bucket %s" % (BASE_PATH, BUILD_DIR, LAMBDA_S3_BUCKET))
    os.system("cd %s && sam deploy --template-file packaged.yaml --stack-name %s --capabilities CAPABILITY_IAM --region %s" % (BASE_PATH, STACK_NAME, AWS_REGION))
except Exception as e:
    print(e)  # Exception objects have no .message attribute in Python 3
    exit(1)
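As a side note, os.system silently ignores non-zero exit codes, so a failed sam build would not stop the script. A variant using subprocess.run with check=True stops at the first failure; the paths below are placeholders, same as above:

```python
import subprocess

BASE_PATH = "/path/to/your/project/code/dir"   # placeholder
BUILD_DIR = BASE_PATH + "/build_artifact"      # placeholder

def run(cmd, cwd="."):
    """Run one command, raising CalledProcessError on a non-zero exit."""
    return subprocess.run(cmd, cwd=cwd, check=True)

# The three sam steps would then look like (not executed here):
# run(["sam", "build", "--template", "template.yaml",
#      "--build-dir", BUILD_DIR], cwd=BASE_PATH)
# run(["sam", "package", "--template-file", BUILD_DIR + "/template.yaml",
#      "--output-template-file", "packaged.yaml", "--s3-bucket", BUCKET], cwd=BASE_PATH)
# run(["sam", "deploy", "--template-file", "packaged.yaml",
#      "--stack-name", STACK, "--capabilities", "CAPABILITY_IAM"], cwd=BASE_PATH)
```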
Related
So I'm a CDK and TypeScript beginner.
After successfully deploying a couple of stacks, I'm now getting the following error with cdk synth: Unexpected token export. Subprocess exited with error 1.
I'm less interested in solving this issue and more interested to where the stack trace is, or any kind of additional info about the error. Doing a --trace or -v does not really provide much helpful info.
Any ideas how I can obtain such information?
What happens behind the scenes is that CDK converts the stack into a CloudFormation template and saves it to S3 - the S3 bucket created when running cdk bootstrap (more info here).
When you run cdk synth, CDK tries to convert the code (in your case TypeScript) into a CloudFormation template. The error Unexpected token export could come from an async call that didn't finish. It means your code could not be converted into a CloudFormation template, but it doesn't mean your "cdk" code is broken.
When you run cdk deploy, CDK compares the synthesized template with the one in S3 and deploys only the diffs.
Update:
DevopsStart recently published a new article about debugging CDK in VS Code.
This might be helpful.
CDK Debugging in VSCode.
I believe the issue is that npx is used to run ts-node under the hood, and npx appears to swallow the stack, as described here.
One work around is add a try/catch block, e.g.
try {
  main()
} catch (e) {
  console.log(e)
  throw e
}
So I think this is being caused by some JavaScript (yours, or possibly in an imported module) that uses ESM export syntax.
This confused me a little at first because I'm using import/export syntax all over my project. However, the project is written in TypeScript, which means it's compiled to JavaScript before execution, and most likely all of these ESM statements are being emitted in CJS syntax.
I was, however, also using lodash-es because I preferred the import syntax. That module internally uses ESM syntax in JavaScript, and as such it is interpreted directly by the Node runtime. Modern versions of Node can use ESM syntax, but only under specific package configs, which mine did not have.
My solution was simply to remove the offending package, but you may be able to go the other direction and configure your package.json so that ESM is enabled in Node. This may also require your tsconfig.json to be updated so that the TypeScript compiler emits ESM as well.
https://nodejs.org/api/esm.html
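A minimal sketch of the config changes described above (assumptions: a Node version with ESM support and a recent TypeScript; the exact option values depend on your setup). The package.json excerpt opts the package into ESM:

```json
{ "type": "module" }
```

and the matching tsconfig.json excerpt makes tsc emit ESM instead of CJS:

```json
{ "compilerOptions": { "module": "ES2015" } }
```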
Passing --debug flag helped me.
cdk synth --debug
I would like to use pipenv as my virtual environment and dependency manager for my Python CDK projects created with 'cdk init'. I read that you can specify a 'custom' application template but could not find documentation on creating one. Is this possible, and can the virtual environment/dependency manager be controlled using this feature?
I would like to be able to run 'cdk init hello-world --language python' and have the scaffolding for the project be generated BUT using pipenv.
It's not possible to do that without modifying the source code for the CDK package itself. You likely won't want to manage your own divergent version of the standard package.
I've shoe-horned CDK to work with PipEnv a couple of times, and it's more work than it's worth at this point. The problem is that PipEnv forces the . delimiter in the package name to a -; pipenv install aws-cdk.aws-rds is listed as aws-cdk-aws-rds in the Pipfile, and the package installations don't actually work.
There's an open issue on the repo for this though (https://github.com/aws/aws-cdk/issues/3671), so you could +1 there in hopes that they can address it. It really is an issue with Pipenv though.
Following the link from Scott for the open issue, it looks like this works now, provided the package name is in quotes.
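For reference, a Pipfile sketch of that quoting workaround (the package set and version pins are placeholders, and I haven't verified the minimum pipenv version this needs):

```toml
[packages]
"aws-cdk.core" = "*"
"aws-cdk.aws-rds" = "*"
```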
I am trying to migrate a huge project containing Visual Studio and Maven builds to Bazel. I need to access our in-house Maven server, which requires authentication. To get access I need to load the maven_jar Skylark extension, since the default implementation does not support authentication (I get error 401). Using the extension leads to a lot of trouble, like:
ERROR: BUILD:4:1: no such package '@org_bouncycastle_bcpkix_jdk15on//jar': Traceback (most recent call last):
        File ".../external/bazel_tools/tools/build_defs/repo/maven_rules.bzl", line 280
                _maven_artifact_impl(ctx, "jar", _maven_jar_build_file_te...)
        File ".../external/bazel_tools/tools/build_defs/repo/maven_rules.bzl", line 248, in _maven_artifact_impl
                fail(("%s: Failed to create dirs in e...))
org_bouncycastle_bcpkix_jdk15on: Failed to create dirs in execution root.
The main issue seems to be the shell that needs to be provided to Bazel via the BAZEL_SH environment variable:
I am working under windows
I am using bazel 0.23.2
Bazel seems to run "bash" directly, not the one provided by the env variable.
I had an Ubuntu shell installed on Windows; Bazel was using everything from Ubuntu, especially when using Maven (settings.xml was taken from the Ubuntu ~/.m2, not from the Windows user's).
After uninstalling Ubuntu and making sure that bash in a cmd ends up in "command not found", I also removed the BAZEL_SH env var, and Bazel throws the message above.
After setting the BAZEL_SH variable again, it fails with the same error message.
I am assuming that bazel gets a bash from somewhere or is ignoring the env variable. My questions are:
1. How to setup a correct shell?
2. Is BAZEL_SH needed when using current version?
For me the doc at bazel website about setup is outdated.
Cheers
Please consider using rules_jvm_external to manage your Maven dependencies. It supports both Windows and private repositories using HTTP Basic Authentication.
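For reference, a WORKSPACE sketch of what that could look like; the repository URL, credentials, and artifact coordinates below are placeholders (rules_jvm_external documents HTTP Basic Authentication via user:password embedded in the repository URL):

```python
# WORKSPACE sketch (Starlark); assumes rules_jvm_external has already been
# fetched via http_archive.
load("@rules_jvm_external//:defs.bzl", "maven_install")

maven_install(
    artifacts = [
        "org.bouncycastle:bcpkix-jdk15on:1.64",  # placeholder coordinates
    ],
    repositories = [
        # HTTP Basic Auth: credentials embedded in the repository URL
        "https://user:password@maven.example.com/repository/maven2",
    ],
)
```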
For me the doc at bazel website about setup is outdated.
The Bazel team is aware of this and will be updating our docs shortly.
I have cloned and built the waf script using:
./waf-light configure
Then, to build my project (provided by Gomspace), I need to add waf and eclipse.py to my path. So far I haven't found better than this setenv script:
WAFROOT=~/git/waf/
export PYTHONPATH=$WAFROOT/waflib/extras/:$PYTHONPATH
export PATH=~/git/waf/:$PATH
Called with:
source setenv
This is somehow a pretty ugly solution. Is there a more elegant way to install waf?
You don't install waf. The command you found correctly builds waf: ./waf-light configure build. Then, for each project you create, you put the built waf script into that project's root directory. I can't find a reference, but this is the way in which waf's primary author, Thomas Nagy, wants the tool to be used. Projects that repackage waf to make the tool installable aren't "officially sanctioned."
There are advantages and disadvantages with non-installation:
Disadvantages:
You have to add the semi-binary, roughly 100 kB waf file to your repository.
Because the file contains binary code, people can have legal objections to distributing it.
Advantages:
It doesn't matter if new versions of waf break the old API.
Users don't need to install waf before compiling the project -- having Python on the system is enough.
Fedora (at least Fedora 22) has a yum package for waf, so a system install of waf is possible, albeit with a hack.
After you run something like python3 ./waf-light configure build, you'll get a file called waf that's actually a Python script with some binary data at the end. If you put it into /usr/bin and run it as non-root, you'll get an error because it fails to create a directory in /usr/bin. If you run it as root, you'll get the new directory and /usr/bin/waf runs normally.
Here's the trick that I learned from examining the find_lib() function in the waf Python script.
1. Copy waf to /usr/bin/waf.
2. As root, run /usr/bin/waf. Notice that it creates a directory; you'll see something like /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18
3. mv that directory to /usr/lib, dropping the . in the directory name, e.g. mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18
4. If you want to use waf with Python 3, repeat steps 2-3 running the Python script /usr/bin/waf under Python 3. Under Python 3, the directory names will start with .waf3-/waf3- instead of .waf-/waf-.
5. (Optional) Remove the binary data at the end of /usr/bin/waf.
Now, non-root should be able to just use /usr/bin/waf.
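The move-and-rename part of the steps above can be sketched in Python; promote_waf_dir is a hypothetical helper (directory names follow the pattern above, and a real run against /usr/bin would need root):

```python
import glob
import os
import shutil

def promote_waf_dir(bin_dir="/usr/bin", lib_dir="/usr/lib"):
    """Move waf's hidden cache dirs (.waf-* / .waf3-*) from bin_dir to
    lib_dir, dropping the leading dot, as in the steps above."""
    moved = []
    for d in glob.glob(os.path.join(bin_dir, ".waf*")):
        target = os.path.join(lib_dir, os.path.basename(d)[1:])  # drop "."
        shutil.move(d, target)
        moved.append(target)
    return moved
```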
That said, here's something to consider, like what another answer said: I believe waf's author intended waf to be embedded in projects so that each project can use its own version of waf without fear that a project will fail to build when there are newer versions of waf. Thus, the one-global-version use case seems to be not officially supported.
I'm attempting to compile libhdfs (a native shared library that allows external apps to interface with hdfs). It's one of the few steps I have to take to mount Hadoop's hdfs using Fuse.
The compilation seems to go well for a while but finishes with "BUILD FAILED" and the following problems summary -
commons-logging#commons-logging;1.0.4: configuration not found in commons-logging#commons-logging;1.0.4: 'master'. It was required from org.apache.hadoop#Hadoop;working#btsotbal800 commons-logging
log4j#log4j;1.2.15: configuration not found in log4j#log4j;1.2.15: 'master'. It was required from org.apache.hadoop#Hadoop;working#btsotbal800 log4j
Now, I have a couple of questions about this, since the book I'm using doesn't go into any detail about what these things really are.
Are commons-logging and log4j libraries which Hadoop uses?
These libraries seem to live in $HADOOP_HOME/lib. They are jar files though. Should I extract them, try to change some configurations, and then repack them back into a jar?
What does 'master' in the errors above mean? Are there different versions of the libraries?
Thank you in advance for ANY insight you can provide.
If you are using Cloudera Hadoop (CDH3u2), you don't need to build the FUSE project.
You can find the binary (libhdfs.so*) inside the directory $HADOOP_HOME/c++/lib.
Before the FUSE mount, update "$HADOOP_HOME/contrib/fuse-dfs/src/fuse_dfs_wrapper.sh" as follows:
HADOOP_HOME/contrib/fuse-dfs/src/fuse_dfs_wrapper.sh
#!/bin/bash
for f in ${HADOOP_HOME}/hadoop*.jar ; do
    export CLASSPATH=$CLASSPATH:$f
done
for f in ${HADOOP_HOME}/lib/*.jar ; do
    export CLASSPATH=$CLASSPATH:$f
done
export PATH=$HADOOP_HOME/contrib/fuse-dfs:$PATH
export LD_LIBRARY_PATH=$HADOOP_HOME/c++/lib:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/
fuse_dfs $@
LD_LIBRARY_PATH contains the list of directories here:
"$HADOOP_HOME/c++/lib" contains libhdfs.so, and
"/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/" contains libjvm.so.
# modify /usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/ to match your Java home
Use the following command to mount HDFS:
fuse_dfs_wrapper.sh dfs://localhost:9000/ /home/510600/mount1
To unmount, use the following command:
fusermount -u /home/510600/mount1
I tested FUSE only in Hadoop pseudo-distributed mode, not in cluster mode.