I created a simple API (using Serverless) which is protected by an API key when deployed via $ serverless deploy. However, for local development ($ serverless offline) I do not want to use an API key. How can I disable this for local development only?
This is my serverless.yml:
service: my-service
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs16.x
  region: eu-central-1
  apiGateway:
    apiKeys:
      - name: my-apikey
        value: ${ssm:my-apikey}

functions:
  myfunc:
    handler: src/v1/myfunc/index.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true

plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin
Note: I am aware that I could simply set private: false when doing local development but this is quite tedious when there is a long list of functions.
The solution was to use the --noAuth option:
serverless offline --noAuth
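With that flag, the locally emulated endpoints respond without an API key. A quick sketch of the difference (the URL and key below are placeholders; serverless-offline defaults to port 3000, and some plugin versions prefix the stage to the path):

# deployed API: key still required
curl -H "x-api-key: <your-key>" https://<api-id>.execute-api.eu-central-1.amazonaws.com/dev/v1/myfunc

# local run started with --noAuth: no key header needed
curl http://localhost:3000/v1/myfunc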
I deployed Neo4j in my AKS cluster using the standalone Helm chart.
It all gets deployed and my Node.js server connects to Neo4j correctly.
However queries throw the Neo4jError: Unknown function 'apoc.convert.fromJsonMap' error, so apoc is clearly missing.
I followed the procedure described here https://neo4j.com/docs/operations-manual/current/kubernetes/configuration/#operations-installing-plugins and my Values are here below.
The only difference I find is that in the guide apoc core is actually enabled afterwards by upgrading the helm chart, while I'm installing it with the option enabled already.
Looking at https://neo4j.com/docs/apoc/current/config/ I saw
As of Neo4j v.5.0, APOC config settings are no longer supported in the neo4j.conf file. Please move all apoc.* settings to apoc.conf. It is also possible to set the config settings using environment variables.
so, as neo4j-standalone is using version 4.4.16, I moved the APOC settings from apoc.conf to neo4j.conf, but the APOC procedures are still not found by the queries.
Is there something I'm missing in the configuration to enable APOC?
Thank you very much.
neo4j-db:
  # neo4j-standalone:
  nameOverride: "neo4j"
  fullnameOverride: 'neo4j'
  neo4j:
    # Name of your cluster
    name: "fixit-neo4j" # this will be the label: app: value for the service selector
    password: "password"
    ##
    passwordFromSecret: ""
    passwordFromSecretLookup: false
    edition: "community"
    acceptLicenseAgreement: "yes"
    offlineMaintenanceModeEnabled: false
    resources:
      cpu: "1000m"
      memory: "2Gi"
  volumes:
    data:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-data
        resources:
          requests:
            storage: 4Gi
    backups:
      mode: 'share' # share an existing volume (e.g. the data volume)
      share:
        name: 'logs'
    logs:
      mode: 'volumeClaimTemplate'
      volumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: neo4j-sc-logs
        resources:
          requests:
            storage: 4Gi
  services:
    # A ClusterIP service with the same name as the Helm Release name should be used for Neo4j Driver
    # connections originating inside the Kubernetes cluster.
    default:
      # Annotations for the K8s Service object
      annotations: { }
    # A LoadBalancer Service for external Neo4j driver applications and Neo4j Browser
    neo4j:
      ### this would create cluster-neo4j svc
      enabled: false
  # env:
  #   NEO4J_PLUGINS: '["graph-data-science"]'
  config:
    server.bolt.enabled: "true"
    server.bolt.tls_level: "REQUIRED"
    server.bolt.listen_address: "0.0.0.0:7687"
    dbms.ssl.policy.bolt.client_auth: "NONE"
    dbms.ssl.policy.bolt.enabled: "true"
    server.directories.plugins: "/var/lib/neo4j/labs"
    dbms.security.procedures.unrestricted: "apoc.*"
    server.config.strict_validation.enabled: "false"
    dbms.security.procedures.allowlist: "gds.*,apoc.*"
  apoc_config:
    apoc.trigger.enabled: "true"
    apoc.jdbc.neo4j.url: "jdbc:foo:bar"
    apoc.import.file.enabled: "true"
  startupProbe:
    failureThreshold: 1000
    periodSeconds: 50
  ssl:
    # setting per "connector" matching neo4j config
    bolt:
      privateKey:
        secretName: tls-secret
        subPath: tls.key
      publicCertificate:
        secretName: tls-secret
        subPath: tls.crt
      trustedCerts:
        sources: [ ]
      revokedCerts:
        sources: [ ]
OK, after looking at quite a few issues on the same subject, I found that the solution was to also add dbms.directories.plugins: "/var/lib/neo4j/labs" and dbms.config.strict_validation: "false" to the config section which, as I understand it, mirrors these settings for both the server and the dbms. It did indeed work, but it's odd that the official guide doesn't mention it. The mirrored settings make sense (tell both the server and the dbms where to look for plugins), but the need for them isn't stated anywhere in the docs, and judging by how many posts there are about this, the documentation isn't clear enough. Like many others, I assumed the dbms was already configured with the same location as server.directories.plugins: "/var/lib/neo4j/labs" (which the docs do say to configure) and didn't add the dbms equivalents. Hopefully the docs get updated for future devs' sake; meanwhile, this answer may be helpful.
So the correct configuration is
env:
  NEO4J_PLUGINS: '["graph-data-science"]'
config:
  server.bolt.enabled: 'true'
  server.bolt.tls_level: 'REQUIRED'
  server.bolt.listen_address: '0.0.0.0:7687'
  dbms.ssl.policy.bolt.client_auth: 'NONE'
  dbms.ssl.policy.bolt.enabled: 'true'
  ## apoc
  server.directories.plugins: '/var/lib/neo4j/labs'
  server.config.strict_validation.enabled: 'false'
  dbms.security.procedures.unrestricted: 'apoc.*'
  dbms.security.procedures.allowlist: 'gds.*,apoc.*'
  ### additional needed dbms config mirroring server config
  dbms.directories.plugins: "/var/lib/neo4j/labs"
  dbms.config.strict_validation: "false"
apoc_config:
  apoc.trigger.enabled: "true"
  apoc.jdbc.neo4j.url: "jdbc:foo:bar"
  apoc.import.file.enabled: "true"
It seems the docs omit the step of installing the APOC plugin itself. Change the following line to include APOC as well:
NEO4J_PLUGINS: '["graph-data-science", "apoc"]'
and you should be good
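For reference, in the Helm values shown earlier that would mean enabling the commented-out env block, roughly like this (a sketch assuming the same neo4j-db subchart key used above):

neo4j-db:
  env:
    # have the Neo4j image download and enable both GDS and APOC at startup
    NEO4J_PLUGINS: '["graph-data-science", "apoc"]'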
I'm trying to split my serverless configuration file serverless.yml into multiple files using the ${file(...)} syntax. I have a provider file config/serverless/provider.yml with the following content:
provider:
  name: ...
  ...
And in my serverless.yml I've used it in the following way:
provider: ${file(config/serverless/provider.yml):provider}
However, when I run serverless deploy, I get the following error:
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "provider": Value not found at "file"
Please help me to understand how to properly include other files in my serverless configuration.
NOTE: I've also tried the following config, without success:
provider.yml
name: ...
...
serverless.yml
...
resources:
  - ${file(config/serverless/provider.yml)}
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "provider": Value not found at "file"
The error says it is not finding a value at "file", so double-check the file path. Make sure the path you are using in ${file(<this-path-here>):provider} points to the correct file.
The following works:
File: serverless.yml
service: serverless-framework-include-files
frameworkVersion: "3"
provider: ${file(config/provider.yml):provider}
functions:
  function1:
    handler: index.handler
File: config/provider.yml
provider:
  name: aws
  runtime: nodejs18.x
Alternatively, you can remove the :provider from the ${file(...):provider}:
The following works:
File: serverless2.yml
service: serverless-framework-include-files
frameworkVersion: "3"
provider: ${file(config/provider2.yml)}
functions:
  function1:
    handler: index.handler
File: config/provider2.yml
name: aws
runtime: nodejs18.x
For resources:, the same applies.
This works:
File: serverless.yml
resources:
  - ${file(resources/s3-bucket.yml)}
  - ${file(resources/dynamodb-table.yml)}
File: resources/s3-bucket.yml
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-bucket
      AccessControl: PublicRead
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html
Alternatively, you can do:
File: serverless2.yml
resources:
  Resources:
    MyBucket: ${file(resources/s3-bucket2.yml)}
    MyTable: ${file(resources/dynamodb-table2.yml)}
File: resources/dynamodb-table2.yml
Type: AWS::DynamoDB::Table
Properties:
  TableName: ${self:service}
  AttributeDefinitions:
    - AttributeName: id
      AttributeType: S
  KeySchema:
    - AttributeName: id
      KeyType: HASH
  ProvisionedThroughput:
    ReadCapacityUnits: 1
    WriteCapacityUnits: 1
You can check the GitHub code for a working example:
https://github.com/oieduardorabelo/serverless-framework-include-files
I am a newbie in AWS CloudFormation. My Elastic Beanstalk worker uses Ruby on Rails, and the EB environment is a stack based on a CloudFormation template.
I don't know why, but when I deployed recently (eb deploy), the Events gave an error message.
The AWSEBLoadBalancer is not in the Resources: of the template, but I do find it in the .ebextensions of the source code:
Resources:
  AWSEBLoadBalancer:
    Properties:
      AccessLoggingPolicy:
        EmitInterval: 5
        Enabled: true
        S3BucketName:
          Ref: LogsBucket
    Type: "AWS::ElasticLoadBalancing::LoadBalancer"
    DependsOn: "LogsBucketPolicy"
  LogsBucket:
    DeletionPolicy: Retain
    Type: "AWS::S3::Bucket"
  LogsBucketPolicy:
    Properties:
      Bucket:
        Ref: LogsBucket
      PolicyDocument:
        Statement:
          - Action:
              - "s3:PutObject"
            Effect: Allow
            Principal:
              AWS:
                Fn::FindInMap:
                  - Region2ELBAccountId
                  - Ref: "AWS::Region"
                  - AccountId
            Resource:
              Fn::Join:
                - ""
                - - "arn:aws:s3:::"
                  - Ref: LogsBucket
                  - /AWSLogs/
                  - Ref: "AWS::AccountId"
Can you please give me some hints to solve this problem?
The error message says that you are missing Listeners. With the Listeners added, your load balancer definition would be something like this (modify to your own settings):
AWSEBLoadBalancer:
  Properties:
    Listeners:
      - InstancePort: 80
        InstanceProtocol: HTTP
        LoadBalancerPort: 80
        #PolicyNames:
        #  - String
        Protocol: HTTP
        #SSLCertificateId: String
    AccessLoggingPolicy:
      EmitInterval: 5
      Enabled: true
      S3BucketName:
        Ref: LogsBucket
  Type: "AWS::ElasticLoadBalancing::LoadBalancer"
  DependsOn: "LogsBucketPolicy"
PROBLEM
I cannot get serverless-offline to run when not connected to the internet.
serverless.yml
service: my-app

plugins:
  - serverless-offline

# run on port 4000, because client runs on 3000
custom:
  serverless-offline:
    port: 4000

# app and org for use with dashboard.serverless.com
app: my-app
org: my-org

provider:
  name: aws
  runtime: nodejs10.x

functions:
  getData:
    handler: data-service.getData
    events:
      - http:
          path: data/get
          method: get
          cors: true
          isOffline: true
  saveData:
    handler: data-service.saveData
    events:
      - http:
          path: data/save
          method: put
          cors: true
          isOffline: true
To launch serverless offline, I run serverless offline start in terminal. This works when I am connected to the internet, but when offline, I get the following errors:
Console Error
:4000/data/get:1 Failed to load resource: net::ERR_CONNECTION_REFUSED
20:34:02.820 localhost/:1 Uncaught (in promise) TypeError: Failed to fetch
Terminal Error
FetchError: request to https://api.serverless.com/core/tenants/{tenant}/applications/my-app/profileValue failed, reason: getaddrinfo ENOTFOUND api.serverless.com api.serverless.com:443
Request
I suspect the cause is because I am not sure how to setup offline using instruction: "The event object passed to your λs has one extra key: { isOffline: true }. Also, process.env.IS_OFFLINE is true."
Any assistance on how to debug the issue would be much appreciated.
Probably you already fixed it, but the problem is caused by the app and org attributes:
# app and org for use with dashboard.serverless.com
app: my-app
org: my-org
When you use them, Serverless will use configuration set on dashboard.serverless.com, commonly environment variables.
To use environment variables locally instead, you can use the serverless-dotenv-plugin. This way, you don't need to be connected to the internet.
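A minimal sketch of how that could look (the TABLE_NAME variable and its value are placeholders, not from the question), with app and org removed so nothing is resolved from the dashboard:

File: .env
TABLE_NAME=my-local-table

File: serverless.yml (relevant parts only)
service: my-app
plugins:
  - serverless-offline
  - serverless-dotenv-plugin
provider:
  name: aws
  runtime: nodejs10.x
  environment:
    TABLE_NAME: ${env:TABLE_NAME}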
I am trying to build a Docker container with existing datasources, dashboards, and notification channels. The provisioning of datasources and dashboards is working, but not the provisioning of notification channels. I am using Grafana v6.3.5 (commit: 67bad72).
I am using the example config from the Grafana Provisioning documentation. I have added it to the /etc/grafana/provisioning/notifiers directory to a file called AlertNotificationChannel.yaml
I can see it is processing the file, because the logs show the message "Deleting alert notification logger=provisioning.notifiers name=notification-channel-1 uid=notifier1". However, there are no messages about inserting or updating an alert notification, and nothing shows up in the UI.
Contents of the YAML file:
notifiers:
  - name: notification-channel-1
    type: slack
    uid: notifier1
    # either
    org_id: 2
    # or
    org_name: Main Org.
    is_default: true
    send_reminder: true
    frequency: 1h
    disable_resolve_message: false
    # See `Supported Settings` section for settings supported for each
    # alert notification type.
    settings:
      recipient: "XXX"
      token: "xoxb"
      uploadImage: true
      url: https://slack.com

delete_notifiers:
  - name: notification-channel-1
    uid: notifier1
    # either
    org_id: 2
    # or
    org_name: Main Org.
I believe this functionality was added after Grafana v5, and I am trying to follow the documentation, but it is not working.
So I was having the same issue for a little bit today and I was able to make it work. I would guess that you ended up finding a solution, but I find it useful to post an example of something that works for future people running into this issue. The reason nothing was appearing in the UI is probably because there was a mistake somewhere.
This is an example of my docker-compose:
grafana:
  image: grafana/grafana
  container_name: grafana
  restart: always
  user: "0"
  ports:
    - "3000:3000"
  volumes:
    - type: bind
      source: "/root/Docker/grafana/grafana"
      target: "/var/lib/grafana"
    - type: bind
      source: "/root/Docker/grafana/provisioning"
      target: "/etc/grafana/provisioning"
This is an example of my "/grafana/provisioning/notifiers/slack.yml"
notifiers:
  - name: slack-alarming
    type: slack
    username: Grafa_Alert
    is_default: true
    send_reminder: true
    org_name: LML
    settings:
      uploadImage: true
      url: POSTHOOKURL from slack
Note that the org name is the name of my company and the username is arbitrary.
Thanks,
Wassim