I'm trying to split my serverless configuration file serverless.yml into multiple files using the ${file(...)} syntax. I have a provider file config/serverless/provider.yml with the following content:
provider:
name: ...
...
And in my serverless.yml I've used it in the following way:
provider: ${file(config/serverless/provider.yml):provider}
However, when I run serverless deploy, I get the following error:
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "provider": Value not found at "file"
Please help me to understand how to properly include other files in my serverless configuration.
NOTE: I've also tried the following config, without success:
provider.yml
name: ...
...
serverless.yml
...
resources:
- ${file(config/serverless/provider.yml)}
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "provider": Value not found at "file"
The error means it could not find a value at "file", so double-check the file path.
Make sure the path you are using in ${file(<this-path-here>):provider} points to the correct file. Paths inside ${file(...)} are resolved relative to the directory containing serverless.yml.
The following works:
File: serverless.yml
service: serverless-framework-include-files
frameworkVersion: "3"
provider: ${file(config/provider.yml):provider}
functions:
function1:
handler: index.handler
File: config/provider.yml
provider:
name: aws
runtime: nodejs18.x
Alternatively, you can remove the :provider from the ${file(...):provider}:
The following works:
File: serverless2.yml
service: serverless-framework-include-files
frameworkVersion: "3"
provider: ${file(config/provider2.yml)}
functions:
function1:
handler: index.handler
File: config/provider2.yml
name: aws
runtime: nodejs18.x
For resources:, the same applies.
This works:
File: serverless.yml
resources:
- ${file(resources/s3-bucket.yml)}
- ${file(resources/dynamodb-table.yml)}
File: resources/s3-bucket.yml
Resources:
MyBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: my-bucket
AccessControl: PublicRead
WebsiteConfiguration:
IndexDocument: index.html
ErrorDocument: error.html
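The resources/dynamodb-table.yml referenced above follows the same pattern; a minimal sketch, assuming a simple table keyed on id (the resource body mirrors the dynamodb-table2.yml shown further below):
File: resources/dynamodb-table.yml
Resources:
  MyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:service}
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1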
Alternatively, you can do:
File serverless2.yml
resources:
Resources:
MyBucket: ${file(resources/s3-bucket2.yml)}
MyTable: ${file(resources/dynamodb-table2.yml)}
File resources/dynamodb-table2.yml
Type: AWS::DynamoDB::Table
Properties:
TableName: ${self:service}
AttributeDefinitions:
- AttributeName: id
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
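Likewise, resources/s3-bucket2.yml would contain only the resource body, i.e. the bucket definition from earlier without the Resources:/MyBucket: wrapper; a minimal sketch:
File: resources/s3-bucket2.yml
Type: AWS::S3::Bucket
Properties:
  BucketName: my-bucket
  AccessControl: PublicRead
  WebsiteConfiguration:
    IndexDocument: index.html
    ErrorDocument: error.html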
You can check the GitHub code for a working example:
https://github.com/oieduardorabelo/serverless-framework-include-files
Related
I created a simple API (using Serverless) which is protected by an API key (when deployed via $ serverless deploy). However, for local development ($ serverless offline) I do not want to use an API key. How can I disable this for local development only?
This is my serverless.yml:
service: my-service
frameworkVersion: "3"
provider:
name: aws
runtime: nodejs16.x
region: eu-central-1
apiGateway:
apiKeys:
- name: my-apikey
value: ${ssm:my-apikey}
functions:
myfunc:
handler: src/v1/myfunc/index.get
events:
- http:
path: /v1/myfunc
method: get
private: true
plugins:
- serverless-esbuild
- serverless-offline
- serverless-dotenv-plugin
Note: I am aware that I could simply set private: false when doing local development but this is quite tedious when there is a long list of functions.
The solution was to use the --noAuth option:
serverless offline --noAuth
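If you would rather not pass the flag on every run, serverless-offline also reads its options from the plugin's custom block in serverless.yml, so something like the following should work (assuming your serverless-offline version supports the noAuth option):
custom:
  serverless-offline:
    noAuth: true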
I have just upgraded the testcontainers library from github.com/testcontainers/testcontainers-go v0.12.0 to github.com/testcontainers/testcontainers-go v0.13.0.
Previously, this is how I was creating a request:
ContainerRequest: testcontainers.ContainerRequest{
Image: mountebankImage,
Name: uuid.New().String(),
ExposedPorts: []string{mountebankExposedPort},
BindMounts: map[string]string{"/mountebank": path.Join(c.rootDir, "/test/stubs/mountebank")},
Entrypoint: []string{"mb", "start", "--configfile", "/mountebank/imposters.ejs"},
Networks: []string{c.network.Name},
In the recent version of the testcontainers library, BindMounts (no longer supported) was replaced by Mounts.
I tried making the same change in my init script, but I was not able to work out the new form. The field in question is part of the request body:
BindMounts: map[string]string{"/mountebank": path.Join(c.rootDir, "/test/stubs/mountebank")},
I tried testcontainers.ContainerMounts{} etc.
Am I missing something?
The ContainerRequest object contains a list of ContainerMount objects, whose documentation states that
Source is typically either a GenericBindMountSource or a GenericVolumeMountSource.
GenericBindMountSource just names a host path. You could also use a DockerBindMountSource if you needed advanced options.
So you should be able to replace that BindMounts: parameter with Mounts:
ContainerRequest: testcontainers.ContainerRequest{
Mounts: testcontainers.Mounts(testcontainers.ContainerMount{
Source: testcontainers.GenericBindMountSource{
HostPath: path.Join(c.rootDir, "/test/stubs/mountebank"),
},
Target: testcontainers.ContainerMountTarget("/mountebank"),
}),
...
},
I am a newbie in AWS CloudFormation. My Elastic Beanstalk worker uses Ruby on Rails, and the EB environment is a stack based on a CloudFormation template.
I don't know why, but when I deploy (eb deploy) recently, the Events tab gives an error message.
The AWSEBLoadBalancer is not in the Resources: section of the template, but I do find it in .ebextensions in the source code:
Resources:
AWSEBLoadBalancer:
Properties:
AccessLoggingPolicy:
EmitInterval: 5
Enabled: true
S3BucketName:
Ref: LogsBucket
Type: "AWS::ElasticLoadBalancing::LoadBalancer"
DependsOn: "LogsBucketPolicy"
LogsBucket:
DeletionPolicy: Retain
Type: "AWS::S3::Bucket"
LogsBucketPolicy:
Properties:
Bucket:
Ref: LogsBucket
PolicyDocument:
Statement:
-
Action:
- "s3:PutObject"
Effect: Allow
Principal:
AWS:
? "Fn::FindInMap"
:
- Region2ELBAccountId
-
Ref: "AWS::Region"
- AccountId
Resource:
? "Fn::Join"
:
- ""
-
- "arn:aws:s3:::"
-
Ref: LogsBucket
- /AWSLogs/
-
Ref: "AWS::AccountId"
Can you please give me some hints to solve this problem?
The error message says that you are missing Listeners. With Listeners added, your load balancer definition would look something like the following (adjust to your own settings):
AWSEBLoadBalancer:
Properties:
Listeners:
- InstancePort: 80
InstanceProtocol: HTTP
LoadBalancerPort: 80
#PolicyNames:
# - String
Protocol: HTTP
#SSLCertificateId: String
AccessLoggingPolicy:
EmitInterval: 5
Enabled: true
S3BucketName:
Ref: LogsBucket
Type: "AWS::ElasticLoadBalancing::LoadBalancer"
DependsOn: "LogsBucketPolicy"
I am trying to build a Docker container with existing datasources, dashboards and notification channels. The provisioning of datasources and dashboards is working, but the provisioning of notification channels is not. I am using Grafana v6.3.5 (commit: 67bad72).
I am using the example config from the Grafana provisioning documentation. I have added it to the /etc/grafana/provisioning/notifiers directory in a file called AlertNotificationChannel.yaml.
I can see it is processing the file, because the logs contain the message "Deleting alert notification logger=provisioning.notifiers name=notification-channel-1 uid=notifier1". However, there are no messages about inserting or updating an alert notification, and nothing appears in the UI.
Contents of yaml file:
notifiers:
- name: notification-channel-1
type: slack
uid: notifier1
# either
org_id: 2
# or
org_name: Main Org.
is_default: true
send_reminder: true
frequency: 1h
disable_resolve_message: false
# See `Supported Settings` section for settings supported for each
# alert notification type.
settings:
recipient: "XXX"
token: "xoxb"
uploadImage: true
url: https://slack.com
delete_notifiers:
- name: notification-channel-1
uid: notifier1
# either
org_id: 2
# or
org_name: Main Org.
I believe this functionality was added after v5 of Grafana. I am trying to follow the documentation, but it is not working.
I was having the same issue today and was able to make it work. I would guess you have found a solution by now, but I find it useful to post a working example for future readers running into this issue. The reason nothing was appearing in the UI is probably because there was a mistake somewhere.
This is an example of my docker-compose:
grafana:
image: grafana/grafana
container_name: grafana
restart: always
user: "0"
ports:
- "3000:3000"
volumes:
- type: bind
source: "/root/Docker/grafana/grafana"
target: "/var/lib/grafana"
- type: bind
source: "/root/Docker/grafana/provisioning"
target: "/etc/grafana/provisioning"
This is an example of my "/grafana/provisioning/notifiers/slack.yml"
notifiers:
- name: slack-alarming
type: slack
username: Grafa_Alert
is_default: true
send_reminder: true
org_name: LML
settings:
uploadImage: true
url: POSTHOOKURL from slack
Note that the org_name is the name of my company and the username is arbitrary.
Thanks,
Wassim
Say my OpenAPI definition has two servers. Both share the same variables, so I want to reference these variables to avoid duplicating code.
I actually split my OpenAPI definition into multiple files and combine them with swagger-cli bundle. This is what it creates:
openapi: 3.0.2
info:
title: My API
description: 'some description'
version: 1.0.0
servers:
- url: 'https://stage-api.domain.com/foo/{v1}/{v2}/{v3}'
description: Staging API server for QA
variables:
v1:
description: 'variable 1'
default: 'something'
enum:
- 'foo1'
- 'foo2'
v2:
description: 'variable 2'
default: 'something'
enum:
- 'foo1'
- 'foo2'
v3:
description: 'variable 3'
default: 'something'
enum:
- 'foo1'
- 'foo2'
- url: 'https://api.domain.com/foo/{v1}/{v2}/{v3}'
description: PRODUCTION API server
variables:
region:
$ref: '#/servers/0/variables/v1'
brand:
$ref: '#/servers/0/variables/v2'
locale:
$ref: '#/servers/0/variables/v3'
paths: {}
Trying to validate this in Swagger Editor, I get the following errors:
Structural error at servers.1.variables.v1: should NOT have additional properties (additionalProperty: $ref)
Structural error at servers.1.variables.v1: should have required property 'default' (missingProperty: default)
Is it possible to reference the server variables or reuse them in another way?
Of course, I could run swagger-cli bundle -r, but I would like to avoid that.
No, this is not supported. You can request changes to the OpenAPI Specification at
https://github.com/OAI/OpenAPI-Specification/issues
In your example, the server URLs are almost the same except for the subdomain, so you can use a single server definition and make the subdomain a variable:
servers:
- url: 'https://{env}.domain.com/foo/{v1}/{v2}/{v3}'
variables:
env:
description: Environment - staging or production
default: stage-api
enum:
- stage-api
- api
# other variables
# ...
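With this single definition, the default env (stage-api) yields the staging URL and env=api yields the production URL, both using the same v1/v2/v3 variables from the question:
https://stage-api.domain.com/foo/{v1}/{v2}/{v3}   # env = stage-api (default)
https://api.domain.com/foo/{v1}/{v2}/{v3}         # env = api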