How to both run an AWS SAM template locally and deploy it to AWS successfully (with Java Lambdas)? - aws-sam-cli

I'm trying to build an AWS application using SAM (Serverless Application Model) with the Lambdas written in Java.
I was able to get it running locally by using a resource definition like this in the template:
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: HelloWorldFunction
      Handler: helloworld.App::handleRequest
      Runtime: java8
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
But to get the sam package phase to upload only the actual code (and not the whole project directory) to S3 I had to change it to this:
...
    Properties:
      CodeUri: HelloWorldFunction/target/HelloWorld-1.0.jar
...
as documented in the AWS SAM example project README.
However, this breaks the ability to run the application locally with sam build followed by sam local start-api.
I tried to get around this by giving the CodeUri value as a parameter (with --parameter-overrides) and this works locally but breaks the packaging phase because of a known issue with the SAM translator.
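For illustration, the parameterized version of the resource looked roughly like this (the parameter name is just an example):

Parameters:
  CodeUriValue:
    Type: String
    Default: HelloWorldFunction/target/HelloWorld-1.0.jar

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: !Ref CodeUriValue
      Handler: helloworld.App::handleRequest
      Runtime: java8

I then passed something like --parameter-overrides CodeUriValue=HelloWorldFunction to the local sam commands.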
Is there a way to make both the local build and the real AWS deployment working, preferably with the same template file?

The only workaround I've come up with myself so far is to use different template files for local development and for actual packaging and deployment.
To avoid maintaining two almost equal template files I wrote a script for running the service locally:
#!/bin/bash

echo "Copying template..."
sed 's/CodeUri: .*/CodeUri: HelloWorldFunction/' template.yaml > template-local.yaml

echo "Building..."
if sam build -t template-local.yaml
then
    echo "Serving local API..."
    sam local start-api
else
    echo "Build failed, not running service."
fi
This feels less than optimal but does the trick. I would still love to hear better alternatives, though.
Another idea that came to mind was extending a shared base template with separate CodeUri values for the two cases, but I don't think SAM templates support anything like that.

Related

Bitbucket auto deploy to Linux server (DigitalOcean Droplet)

I have encountered a problem while attempting to deploy my code to a Droplet server (running Ubuntu) using Bitbucket Pipelines.
I have set the necessary environment variables (SSH_PRIVATE_KEY, SSH_USER, SSH_HOST) and added the public key of the SSH_PRIVATE_KEY to the ~/.ssh/authorized_keys file on the server. When I manually deploy from the server, there are no issues with cloning or pulling. However, during the automatic CI deployment stage, I am encountering the error shown in the attached image.
This is my .yml configuration.
Thanks in advance for your help.
To refer to the values of the variables defined in the configuration, your script should use $VARIABLE_NAME, not VARIABLE_NAME; the latter is treated as a literal string.
- pipe: atlassian/ssh-run:latest
  variables:
    SSH_KEY: $SSH_KEY
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"
Also, note that some pitfalls exist when using an explicit $SSH_KEY; it is generally easier and safer to use the default key provided by Bitbucket, see Load key "/root/.ssh/pipelines_id": invalid format.
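For context, here is a minimal sketch of how that pipe block fits into a complete bitbucket-pipelines.yml step, using the default repository SSH key so SSH_KEY is omitted (the branch, step and deployment names are just examples):

pipelines:
  branches:
    main:
      - step:
          name: Deploy to Droplet
          deployment: production
          script:
            - pipe: atlassian/ssh-run:latest
              variables:
                SSH_USER: $SSH_USER
                SERVER: $SSH_HOST
                COMMAND: "/app/deploy_dev01.sh"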

Download spark-submit without all the Spark framework to make lite Docker image

Most of the Docker images that embed Apache Spark have the whole Spark archive in them.
Also, most of the time we submit the Spark application on Kubernetes, so the Spark job actually runs in another Docker container.
As such, I am wondering: in order to make the Docker image smaller, how can I embed only the spark-submit feature?
That's a great question! I had a look (at the latest version on the Downloads page: 3.3.1) and found the following:
Looking at the contents of $SPARK_HOME/bin/spark-submit, you can see the following line:
exec "${SPARK_HOME}"/bin/spark-class org.apache.spark.deploy.SparkSubmit "$#"
Ok, so it looks like the $SPARK_HOME/bin/spark-submit script calls the $SPARK_HOME/bin/spark-class script. Let's have a look at that one.
Similar to spark-submit, spark-class calls the load-spark-env.sh script like so:
. "${SPARK_HOME}"/bin/load-spark-env.sh
This load-spark-env.sh script calls other scripts of its own as well. But there is also a bit about Spark jars in spark-class:
# Find Spark jars.
if [ -d "${SPARK_HOME}/jars" ]; then
  SPARK_JARS_DIR="${SPARK_HOME}/jars"
else
  SPARK_JARS_DIR="${SPARK_HOME}/assembly/target/scala-$SPARK_SCALA_VERSION/jars"
fi

if [ ! -d "$SPARK_JARS_DIR" ] && [ -z "$SPARK_TESTING$SPARK_SQL_TESTING" ]; then
  echo "Failed to find Spark jars directory ($SPARK_JARS_DIR)." 1>&2
  echo "You need to build Spark with the target \"package\" before running this program." 1>&2
  exit 1
else
  LAUNCH_CLASSPATH="$SPARK_JARS_DIR/*"
fi

# Add the launcher build dir to the classpath if requested.
if [ -n "$SPARK_PREPEND_CLASSES" ]; then
  LAUNCH_CLASSPATH="${SPARK_HOME}/launcher/target/scala-$SPARK_SCALA_VERSION/classes:$LAUNCH_CLASSPATH"
fi
So as you can see, it is referencing the spark jars directory (288MB of the total 324MB for Spark 3.3.1) and putting that on the launch classpath. Now, it's very possible that not all of those jars are needed in the case of submitting an application on kubernetes. But at least you need some kind of library to translate a spark application to some kubernetes configuration that your Kubernetes API server can understand.
So my conclusion from this bit is:
We can quite easily trace which files are actually needed. At first glance, I would say anything in $SPARK_HOME/bin and $SPARK_HOME/conf. That is not an issue, since they are all very small scripts/conf files.
Some of those scripts, though, put the jars directory on the classpath for the final java command.
Maybe they don't need all the jars, but they will need some kind of library to connect to the Kubernetes API server. So I would expect some jar to be needed there. I see there is a jar called kubernetes-model-core-5.12.2.jar. Maybe this one?
Since most of the size of this $SPARK_HOME folder comes from those jars, you can try to delete some jars and run your spark-submit jobs to see what happens. I would think that, amongst others, jars like commons-math3-3.6.1.jar or spark-mllib_2.12-3.3.1.jar would not be necessary for a simple spark-submit to a Kubernetes API Server.
(All those specific jars just come from the one Spark version I mentioned at the start of this post.)
Really interesting question, I hope this helps you a bit! Just try deleting some jars, run your spark-submit and see what happens!
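If you want to experiment in that direction, here is a rough Dockerfile sketch that starts from a plain JRE image and copies only the launcher scripts, the conf directory and a trimmed-down jars directory. The base image and the assumption that a hand-trimmed jars/ folder is enough are mine, so treat it as a starting point rather than a verified minimal image:

# Rough sketch, not a verified minimal image
FROM eclipse-temurin:11-jre

ENV SPARK_HOME=/opt/spark
ENV PATH="${SPARK_HOME}/bin:${PATH}"

# Copy only the small launcher scripts and configuration from an
# unpacked spark-3.3.1-bin-hadoop3 distribution
COPY spark-3.3.1-bin-hadoop3/bin  ${SPARK_HOME}/bin
COPY spark-3.3.1-bin-hadoop3/conf ${SPARK_HOME}/conf

# Copy a jars directory you have trimmed down by deleting jars and
# re-testing your spark-submit jobs, as described above
COPY trimmed-jars/ ${SPARK_HOME}/jars/

ENTRYPOINT ["/opt/spark/bin/spark-submit"]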

circleci config.yml: 'Invalid template path template.yml'

I am working on a CI/CD project (using a CircleCI pipeline) and currently I am stuck on getting my "create_infrastructure" job to work. Below is the job:
# AWS infrastructure
create_infrastructure:
  docker:
    - image: amazon/aws-cli
  steps:
    - checkout
    - run:
        name: Ensure backend infrastructure exist
        command: |
          aws cloudformation deploy \
            --template-file template.yml \
            --stack-name my-stack
When I run the job above, it returns Invalid template path template.yml.
Where am I supposed to keep the template.yml file?
I placed it in the same location as the config.yml in the project's GitHub repository (is this right?).
Could the problem be with the line --template-file template.yml in my job? (I am a beginner here.)
Please, I need help.
I had actually misspelled the name of the template in my GitHub repository. Everything worked well after I corrected it.
But I think this error message was not explicit at all; I was expecting something like 'template not found in the path specified' instead of 'Invalid template path template.yml'.
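For anyone who hits the same error and is not sure what the job actually sees, a quick sanity check is to list the checked-out files before the deploy step (a minimal sketch; the extra run step is purely for debugging and can be removed afterwards):

create_infrastructure:
  docker:
    - image: amazon/aws-cli
  steps:
    - checkout
    - run:
        name: Show checked-out files
        command: ls -la
    - run:
        name: Ensure backend infrastructure exist
        command: |
          aws cloudformation deploy \
            --template-file template.yml \
            --stack-name my-stack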

How to configure PhpStorm, Codeception and Docker to reliably get code coverage

I cannot figure out how to reliably configure the parts of my project to get code coverage displayed in PhpStorm.
I am using PhpStorm (EAP), Docker (19.03.5-rc1) and docker-compose (1.24.1). I set up my project with a docker-compose.yml that includes a php service (Docker image in2code/php-dev:7.3-fpm, which includes Xdebug and is based on the official php:7.3-fpm image).
I created a new project with Composer and required Codeception (3.1.2). I ran the Codeception bootstrap, added the coverage settings, created a unit test and ran the whole test suite with coverage. The coverage either does not appear in PhpStorm or it shows 0% everywhere. I cannot figure out how to configure PhpStorm/Codeception to show the coverage. There are projects where this works, but they are configured to use a Docker image instead of a running docker-compose container.
I tried the following remote PHP interpreters:
Remote PHP Interpreter -> Docker -> Image (in2code/php-dev:7.3-fpm)
Remote PHP Interpreter -> Docker -> Image built by docker-compose for this project (cct_php:latest)
Remote PHP Interpreter -> Docker Compose -> service php -> docker-compose exec
Remote PHP Interpreter -> Docker Compose -> service php -> docker-compose run
I created a PHP Test Framework for each interpreter I created above.
I created a Codeception run configuration for each Test Framework configuration.
I executed all Codeception run configurations with every combination of the (Project Default) PHP CLI Interpreter and the other remote interpreters.
The Testing Framework is configured with the correct path to Codeception (the Codeception version is detected by PhpStorm), and it holds the path to the codeception.yml file as the default configuration file. All run configurations use the default configuration file from the test framework configuration.
I also tried to enable coverage in the root codeception.yml file, and tried work_dir: /app and remote: false.
None of these attempts produced code coverage that was displayed in PhpStorm.
Projects where code coverage works are configured with a PHP Remote Interpreter from a Docker image (the docker-compose built image for that project).
Edit: The CLI interpreter for the project must be the image built by docker-compose build. Setting different command line interpreters in the Codeception run configuration does not have any effect.
docker-compose.yml
version: '3.7'
services:
  php:
    image: in2code/php-dev:7.3-fpm
    volumes:
      - ./:/app/
      - $HOME/.composer/auth.json:/tmp/composer/auth.json
      - $HOME/.composer/cache/:/tmp/composer/cache/
tests/unit.suite.yml
actor: UnitTester
modules:
  enabled:
    - Asserts
    - \App\Tests\Helper\Unit
step_decorators: ~
coverage:
  enable: true
  remote: true
  include:
    - src/*
tests/unit/App/Controller/AirplaneControllerTest.php
<?php

declare(strict_types=1);

namespace App\Tests\App\Controller;

use App\Controller\AirplaneController;

class AirplaneControllerTest extends \Codeception\Test\Unit
{
    /**
     * @covers \App\Controller\AirplaneController::start
     */
    public function testSomeFeature()
    {
        $airplaneController = new AirplaneController();
        $airplaneController->start();
    }
}
Did I miss something in my configuration?
The best solution would be a working configuration that uses docker-compose exec for the remote interpreter, so that other services like mysql or ldap are available for functional tests.
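For reference, what I have in mind is simply adding those services next to php in the docker-compose.yml, roughly like this (the mysql image and credentials are only placeholders):

version: '3.7'
services:
  php:
    image: in2code/php-dev:7.3-fpm
    volumes:
      - ./:/app/
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: app_test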
Unfortunately, it's hopelessly broken at the moment: https://youtrack.jetbrains.com/issue/WI-32625
I've noticed that PhpStorm calls Codeception with this option:
--coverage-xml /opt/phpstorm-coverage/admin_service$unit_tests.xml
but when testing is done I get this message
XML report generated in /opt/phpstorm-coverage/admin_service$$unit_tests.xml
Notice that the filename is different (two $ characters instead of one). So I've created a link using this command:
ln admin_service\$\$unit_tests.xml admin_service\$unit_tests.xml
and restarted the test coverage. The coverage window showed up.

Jenkins X use secrets in Preview environments

I'm using Jenkins X for microservice build / deployment. In each environment there are shared secrets used across microservices (client keys etc.) which are injected into deployment.yaml as environment variables using valueFrom and secretKeyRef. This works well in Production and Staging, where the namespaces are well known, but since preview generates a new namespace each time, these secrets will not exist there. Is there a way to copy secrets from another, known namespace, or is there a better approach?
You can create another namespace called jx-preview to store preview-specific secrets, and add this line after the jx preview command in your Jenkinsfile:
sh "kubectl get secret {secret_name} --namespace={from_namespace} --export -o yaml | kubectl apply --namespace=jx-$ORG-$PREVIEW_NAMESPACE -f -"
Not sure if this is the best way though
We've got a command to link services from one namespace to another, such as linking services from staging to your preview environment via jx step link services.
It would be nice to add a similar command to copy secrets from a namespace in the same way. I've raised an issue to track this new feature.
Another option is to create your own Job in charts/preview/templates/myjob.yaml, have that job create whatever Secrets you need however you want, and then annotate it so that it's triggered as a post-install hook of your Preview chart.
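A minimal sketch of such a hook-annotated Job, reusing the same kubectl copy approach as above (the secret name, source namespace and image are placeholders, and the Job's service account needs RBAC permissions to read the source Secret and create it in the preview namespace):

apiVersion: batch/v1
kind: Job
metadata:
  name: copy-preview-secrets
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: copy-secrets
          # --export was removed in recent kubectl releases, so either pin an
          # older kubectl image tag here or strip the namespace metadata another way
          image: bitnami/kubectl:latest
          command:
            - /bin/sh
            - -c
            - |
              kubectl get secret my-shared-secret --namespace=jx-preview --export -o yaml \
                | kubectl apply --namespace={{ .Release.Namespace }} -f -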
