Serverless error - No file matches include / exclude patterns - serverless

I am trying a skeleton deployment using Python. My folder structure is:
serverless-test
|_lambdas
|____handler.py
|_layers
|____common
|_________somefunction.py
Here is my serverless.yml:

service: serverless-test
frameworkVersion: '2'

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  stage: test
  region: us-west-2

functions:
  hello:
    handler: lambdas/handler.hello
This works fine. Now, as soon as I add a layer, I get the following error:
No file matches include / exclude patterns
service: serverless-test
frameworkVersion: '2'

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  stage: test
  region: us-west-2

functions:
  hello:
    handler: lambdas/handler.hello
    layers:
      - { Ref: CommonLambdaLayer }

layers:
  common:
    path: layers/common
    name: common-module
    description: common set of functions
I also tried adding include and exclude patterns, but that didn't solve the problem:
service: serverless-test
frameworkVersion: '2'

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  stage: test
  region: us-west-2

package:
  individually: true
  exclude:
    - ./**
  include:
    - ./lambdas/**

functions:
  hello:
    handler: lambdas/handler.hello
    layers:
      - { Ref: CommonLambdaLayer }

layers:
  common:
    path: layers/common
    name: common-module
    description: common set of functions
    package:
      include:
        - ./**
I also tried being very specific:
service: serverless-test
frameworkVersion: '2'

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  stage: test
  region: us-west-2

package:
  individually: true
  exclude:
    - ./**

functions:
  hello:
    handler: lambdas/handler.hello
    layers:
      - { Ref: CommonLambdaLayer }
    package:
      exclude:
        - ./**
      include:
        - ./lambdas/handler.py

layers:
  common:
    path: layers/common
    name: common-module
    description: common set of functions
    package:
      exclude:
        - ./**
      include:
        - ./layers/common/somefunction.py

I had the same issue and found this answer here:
Serverless is checking the layer files against the patterns specified in the root-level package exclude, and because ./** matches every file while the include pattern ./functions/**/* matches none of them, no files are actually included in the layer, which causes the error.
Just try removing the ./** from the excludes:
package:
  individually: true
  exclude:
    - ./**   # <-- remove this!
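Building on that answer, one illustrative way to keep the function package small without the root-level ./** exclude is to move the narrowing down to the function itself. This is a sketch only, assuming the folder layout from the question; in later framework versions include/exclude are superseded by package.patterns.

# Sketch only -- the root-level "- ./**" exclude from the question is removed,
# so the layer under layers/common can be packaged from its path.
package:
  individually: true

functions:
  hello:
    handler: lambdas/handler.hello
    layers:
      - { Ref: CommonLambdaLayer }
    package:
      exclude:
        - ./**            # narrow only this function's package...
      include:
        - lambdas/**      # ...back down to the handler code

layers:
  common:
    path: layers/common
    name: common-module
    description: common set of functions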

Related

Drone Pipeline : Drone Cache mount path for Maven Repository not able to resolve

I'm new to Drone pipelines and am interested in using it in my current project for CI/CD.
My project tech stack is as follows:
Java
Spring Boot
Maven
I have created a sample Drone pipeline, but I'm not able to cache the Maven dependencies that are downloaded and stored in the .m2 folder.
It always says the mount path is not available or not found. Please find the screenshot for the same:
Drone mount path issue
I'm not sure of the path to provide here. Can someone help me understand the mount path we need to provide to cache all the dependencies in the .m2 path?
Adding the pipeline information below:
kind: pipeline
type: docker
name: config-server

steps:
  - name: restore-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      restore: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

  - name: build
    image: maven:3.8.3-openjdk-17
    pull: if-not-exists
    environment:
      M2_HOME: /usr/share/maven
      MAVEN_CONFIG: /root/.m2
    commands:
      - mvn clean install -DskipTests=true -B -V
    volumes:
      - name: cache
        path: /tmp/cache

  - name: rebuild-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      rebuild: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

trigger:
  branch:
    - main
  event:
    - push

volumes:
  - name: cache
    host:
      path: /var/lib/cache
Thanks in advance..
Resolved the issue. Please find the solution and the working Drone pipeline below.
kind: pipeline
type: docker
name: data-importer

steps:
  - name: restore-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      restore: true
      ttl: 1
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

  - name: maven-build
    image: maven:3.8.6-amazoncorretto-11
    pull: if-not-exists
    commands:
      - mvn clean install -DskipTests=true -Dmaven.repo.local=.m2/repository -B -V
    volumes:
      - name: cache
        path: /tmp/cache

  - name: rebuild-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      rebuild: true
      cache_key: "volume"
      archive_format: "gzip"
      ttl: 1
      mount:
        - ./.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

trigger:
  branch:
    - main
    - feature/*
  event:
    - push

volumes:
  - name: cache
    host:
      path: /var/lib/cache
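As far as I can tell, the key change is that the working pipeline caches a Maven repository that lives inside the build workspace (./.m2/repository, selected with -Dmaven.repo.local=.m2/repository) rather than /root/.m2/repository, which sits outside the workspace paths that drone-cache mounts.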

How to access environment variables and pass to Lambda function using useDotenv: true option in serverless.yml?

I am trying to pass environment variables to my Lambda function in serverless.yml (version 2.32.0), but I am not sure how to do it. The documentation at https://www.serverless.com/framework/docs/environment-variables/ doesn't explain it. Right now I am using the useDotenv: true option and then trying to access the environment variables with ${process.env.ENV1}, but it is not working. Below is my serverless.yml file:
serverless.yml
service: service-name
frameworkVersion: "2.32.0"
useDotenv: true

provider:
  name: aws
  versionFunctions: false
  runtime: nodejs12.x
  region: <region>
  stage: dev
  profile: default

functions:
  function-name:
    handler: handler
    timeout: 120
    environment:
      ENV1: ${process.env.ENV1}
      ENV2: ${process.env.ENV2}
      ENV3: ${process.env.ENV3}
I get no errors or warnings when I run sls deploy, but the environment variables are not being uploaded. How can I do this?
Okay, I got it working by replacing process.env. with the env: variable source:
serverless.yml:
service: service-name
frameworkVersion: "2.32.0"
useDotenv: true

provider:
  name: aws
  versionFunctions: false
  runtime: nodejs12.x
  region: <region>
  stage: dev
  profile: default

functions:
  function-name:
    handler: handler
    environment:
      ENV1: ${env:ENV1}
      ENV2: ${env:ENV2}
      ENV3: ${env:ENV3}
    timeout: 120
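For completeness: as I understand it, useDotenv: true makes the framework load a .env file from the service root, and that is what the ${env:...} references resolve against. A minimal sketch of such a file, with variable names taken from the config above and purely hypothetical values:

# .env in the service root -- values are placeholders
ENV1=some-value
ENV2=another-value
ENV3=yet-another-value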

aws serverless.yml file "A valid option to satisfy the declaration 'opt:stage' could not be found" error

I'm getting the warning below when trying to run Serverless:
Serverless Warning --------------------------------------------
A valid option to satisfy the declaration 'opt:stage' could not be found.
Below is my serverless.yml file:

# Serverless Config
service: api-service

# Provider
provider:
  name: aws
  runtime: nodejs8.10
  region: ${opt:region, 'ap-east-1'}
  stage: ${opt:stage, 'dev'}
  # Environment Variables
  environment:
    STAGE: ${self:custom.myStage}
    MONGO_DB_URI: ${file(./serverless.env.yml):${opt:stage}.MONGO_DB_URI}
    LAMBDA_ONLINE: ${file(./serverless.env.yml):${opt:stage}.LAMBDA_ONLINE}

# Constants Variables
custom:
  # environments variables used to convert strings to upper-case format
  environments:
    myStage: ${opt:stage, self:provider.stage}
    stages:
      - dev
      - qa
      - staging
      - production
    region:
      dev: 'ap-east-1'
      stage: 'ap-east-1'
      production: 'ap-east-1'

# Function
functions:
  testFunc:
    handler: index.handler
    description: ${opt:stage} API's
    events:
      - http:
          method: any
          path: /{proxy+}
          cors:
            origin: '*'

# Package
package:
  exclude:
    - .env
    - node_modules/aws-sdk/**
    - node_modules/**
In the description of testFunc you're using ${opt:stage}. If you use that directly, you need to pass the --stage flag when you run the deploy command. What you should use there instead is ${self:provider.stage}, because there the stage has already been resolved.
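For example, a sketch of just that function with the description changed accordingly (everything else left as in the question):

functions:
  testFunc:
    handler: index.handler
    description: ${self:provider.stage} API's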
I would suggest the implementation below:
provider:
  name: aws
  runtime: nodejs8.10
  region: ${opt:region, self:custom.environments.region.${self:custom.environments.myStage}}
  stage: ${opt:stage, self:custom.environments.myStage}
  # Environment Variables
  environment:
    STAGE: ${self:custom.myStage}
    MONGO_DB_URI: ${file(./serverless.env.yml):${self:provider.stage}.MONGO_DB_URI}
    LAMBDA_ONLINE: ${file(./serverless.env.yml):${self:provider.stage}.LAMBDA_ONLINE}

# Constants Variables
custom:
  # environments variables used to convert strings to upper-case format
  environments:
    # set the default stage if not specified
    myStage: dev
    stages:
      - dev
      - qa
      - staging
      - production
    region:
      dev: 'ap-east-1'
      stage: 'ap-east-1'
      production: 'ap-east-1'
Basically, if the stage and region are not specified on the command line, the defaults are used; otherwise the command-line values are used.
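Note that the file() lookups above expect serverless.env.yml to be keyed by stage. A minimal sketch of that file, with keys taken from the config above and purely hypothetical values:

# serverless.env.yml -- values are placeholders
dev:
  MONGO_DB_URI: mongodb://example-host:27017/dev-db
  LAMBDA_ONLINE: 'true'
qa:
  MONGO_DB_URI: mongodb://example-host:27017/qa-db
  LAMBDA_ONLINE: 'true'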

Fixing 'invalid reference format' error in docker-image-resource put

I'm currently trying to build and push Docker images; the issue is that I'm receiving this message from Concourse during the wordpress-release put step:
waiting for docker to come up...
invalid reference format
Here's the important bit of the Concourse pipeline:
resources:
  - name: wordpress-release
    type: docker-image
    source:
      repository: #############.dkr.ecr.eu-west-1.amazonaws.com/wordpress-release
      aws_access_key_id: #############
      aws_secret_access_key: #############

  - name: mysql-release
    type: docker-image
    source:
      repository: #############.dkr.ecr.eu-west-1.amazonaws.com/mysql-release
      aws_access_key_id: #############
      aws_secret_access_key: #############

jobs:
  - name: job-hello-world
    plan:
      - get: wordpress-website
      - task: write-release-tag
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: alpine/git }
          inputs:
            - name: wordpress-website
          outputs:
            - name: tags
          run:
            dir: wordpress-website
            path: sh
            args:
              - -exc
              - |
                printf $(basename $(git remote get-url origin) | sed 's/\.[^.]*$/-/')$(git tag --points-at HEAD) > ../tags/release-tag
      - put: wordpress-release
        params:
          build: ./wordpress-website/.
          dockerfile: wordpress-website/shared-wordpress-images/wordpress/wordpress-release/Dockerfile
          tag_file: tags/release-tag
      - put: mysql-release
        params:
          build: ./wordpress-website/
          dockerfile: wordpress-website/shared-wordpress-images/db/mysql-release/Dockerfile
          tag_file: tags/release-tag
Those images contain FROM #############.dkr.ecr.eu-west-1.amazonaws.com/shared-mysql (and shared-wordpress); could this be an issue?
The tag_file: tags/release-tag doesn't seem to be the issue, as this still happens even without it.
This is Concourse 5.0 running on top of Docker in Windows 10.
Any thoughts?

Jenkins installation automation

Old Question
Is it possible to automate a Jenkins installation (Jenkins binaries, plugins, credentials) by using a configuration management tool like Ansible?
Edited
Since asking this question I have learned of and found many ways to automate a Jenkins installation. I found docker-compose an interesting way to do it. So my question is: is there a better way to automate a Jenkins installation than what I am doing, and is there any risk in the way I am handling this automation?
I have taken advantage of the Docker Jenkins image and done the automation with docker-compose.
Dockerfile
FROM jenkinsci/blueocean
RUN jenkins-plugin-cli --plugins kubernetes workflow-aggregator git configuration-as-code blueocean matrix-auth
docker-compose.yaml
version: '3.7'
services:
  dind:
    image: docker:dind
    privileged: true
    networks:
      jenkins:
        aliases:
          - docker
    expose:
      - "2376"
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
  jcac:
    image: nginx:latest
    volumes:
      - type: bind
        source: ./jcac.yml
        target: /usr/share/nginx/html/jcac.yml
    networks:
      - jenkins
  jenkins:
    build: .
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1
      - JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
      - CASC_JENKINS_CONFIG=http://jcac/jcac.yml
      - GITHUB_ACCESS_TOKEN=${GITHUB_ACCESS_TOKEN:-fake}
      - GITHUB_USERNAME=${GITHUB_USERNAME:-fake}
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
        read_only: true
    networks:
      - jenkins

volumes:
  jenkins-home:
  jenkins-docker-certs:

networks:
  jenkins:
jcac.yml
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              id: "github"
              password: ${GITHUB_PASSWORD:-fake}
              scope: GLOBAL
              username: ${GITHUB_USERNAME:-fake}
          - usernamePassword:
              id: "slave"
              password: ${SSH_PASSWORD:-fake}
              username: ${SSH_USERNAME:-fake}

jenkins:
  globalNodeProperties:
    - envVars:
        env:
          - key: "BRANCH"
            value: "hello"
  systemMessage: "Welcome to (one click) Jenkins Automation!"
  agentProtocols:
    - "JNLP4-connect"
    - "Ping"
  crumbIssuer:
    standard:
      excludeClientIPFromCrumb: true
  disableRememberMe: false
  markupFormatter: "plainText"
  mode: NORMAL
  myViewsTabBar: "standard"
  numExecutors: 4
  # nodes:
  #   - permanent:
  #       labelString: "slave01"
  #       launcher:
  #         ssh:
  #           credentialsId: "slave"
  #           host: "worker"
  #           port: 22
  #           sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
  #       name: "slave01"
  #       nodeDescription: "SSH Slave 01"
  #       numExecutors: 3
  #       remoteFS: "/home/jenkins/workspace"
  #       retentionStrategy: "always"
  securityRealm:
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD:-admin123}"
        - id: "user"
          password: "${DEFAULTUSER_PASSWORD:-user123}"
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "Agent/Build:user"
        - "Job/Build:user"
        - "Job/Cancel:user"
        - "Job/Read:user"
        - "Overall/Read:user"
        - "View/Read:user"
        - "Overall/Read:anonymous"
        - "Overall/Administer:admin"
        - "Overall/Administer:root"

unclassified:
  globalLibraries:
    libraries:
      - defaultVersion: "master"
        implicit: false
        name: "jenkins-shared-library"
        retriever:
          modernSCM:
            scm:
              git:
                remote: "https://github.com/samitkumarpatel/jenkins-shared-libs.git"
                traits:
                  - "gitBranchDiscovery"
The commands to start and stop Jenkins are:
# start Jenkins
docker-compose up -d
# stop Jenkins
docker-compose down
Sure it is :) For Ansible, you can always check Ansible Galaxy whenever you want to automate the installation of something. Here is the most popular role for installing Jenkins, and here is its GitHub repo.
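As an illustration only: assuming the Galaxy role referred to above is geerlingguy.jenkins (my assumption, it is not named in the answer), a minimal playbook might look like the sketch below; the jenkins_plugins variable name is taken from that role's documentation as I recall it.

# Hypothetical sketch -- assumes the role is geerlingguy.jenkins
# (install it first with: ansible-galaxy install geerlingguy.jenkins)
- hosts: jenkins
  become: true
  vars:
    jenkins_plugins:        # mirrors the plugin list from the Dockerfile above
      - configuration-as-code
      - blueocean
      - git
  roles:
    - geerlingguy.jenkins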
