serverless offline won't run offline: Failed to load resource: net::ERR_CONNECTION_REFUSED

PROBLEM
I cannot get serverless offline to run when not connected to the internet.
serverless.yml
service: my-app

plugins:
  - serverless-offline

# run on port 4000, because client runs on 3000
custom:
  serverless-offline:
    port: 4000

# app and org for use with dashboard.serverless.com
app: my-app
org: my-org

provider:
  name: aws
  runtime: nodejs10.x

functions:
  getData:
    handler: data-service.getData
    events:
      - http:
          path: data/get
          method: get
          cors: true
          isOffline: true
  saveData:
    handler: data-service.saveData
    events:
      - http:
          path: data/save
          method: put
          cors: true
          isOffline: true
To launch serverless offline, I run serverless offline start in the terminal. This works when I am connected to the internet, but when offline, I get the following errors:
Console Error
:4000/data/get:1 Failed to load resource: net::ERR_CONNECTION_REFUSED
20:34:02.820 localhost/:1 Uncaught (in promise) TypeError: Failed to fetch
Terminal Error
FetchError: request to https://api.serverless.com/core/tenants/{tenant}/applications/my-app/profileValue failed, reason: getaddrinfo ENOTFOUND api.serverless.com api.serverless.com:443
Request
I suspect the cause is that I have not set up offline mode correctly, per this instruction: "The event object passed to your λs has one extra key: { isOffline: true }. Also, process.env.IS_OFFLINE is true."
Any assistance on how to debug the issue would be much appreciated.

You have probably already fixed it, but the problem is caused by the app and org attributes:
# app and org for use with dashboard.serverless.com
app: my-app
org: my-org
When these are set, Serverless pulls configuration (commonly env vars) from dashboard.serverless.com, which requires an internet connection.
To keep using env vars without the dashboard, you can use the serverless-dotenv-plugin. This way, you don't need to be connected to the internet.
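On the instruction quoted in the question: as far as I can tell, there is nothing to configure for it; serverless-offline injects those flags at runtime (so the isOffline: true lines under the http events are not what enables them). A minimal sketch of a handler branching on the flags; the local endpoint is a made-up example:

// data-service.js — a sketch, not the asker's actual handler.
module.exports.getData = async (event) => {
  // serverless-offline sets process.env.IS_OFFLINE and event.isOffline.
  const endpoint = process.env.IS_OFFLINE
    ? 'http://localhost:8000' // hypothetical local backend, e.g. a local DynamoDB
    : undefined;              // fall back to the real AWS endpoint when deployed

  return {
    statusCode: 200,
    headers: { 'Access-Control-Allow-Origin': '*' }, // needed alongside cors: true
    body: JSON.stringify({ offline: !!event.isOffline, endpoint }),
  };
};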

Related

How to disable apikey for local serverless development?

I created a simple API (using Serverless) which is protected by an API key (when deployed via $ serverless deploy). However, for local development ($ serverless offline) I do not want to use an API key. How can I disable this for local development only?
This is my serverless.yml:
service: my-service
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs16.x
  region: eu-central-1
  apiGateway:
    apiKeys:
      - name: my-apikey
        value: ${ssm:my-apikey}

functions:
  myfunc:
    handler: src/v1/myfunc/index.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true

plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin
Note: I am aware that I could simply set private: false when doing local development, but this is quite tedious when there is a long list of functions.
The solution was to use the --noAuth option:
serverless offline --noAuth
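If passing the flag every time is undesirable, another option (a sketch, not from the original answer, assuming the standard Serverless variable syntax with nested references) is to key private off the stage:

# serverless.yml — hypothetical per-stage toggle
custom:
  private:
    dev: false    # local: serverless offline --stage dev
    prod: true    # deployed stages keep the API key
functions:
  myfunc:
    handler: src/v1/myfunc/index.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: ${self:custom.private.${opt:stage, 'dev'}}

Running serverless offline --stage dev would then resolve private to false, while deployed stages keep the key.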

"invalid peer certificate contents..." while running `slack run` using slack-cli running on deno

I am on macOS (M1) trying to run slack run. When I call the workflow and submit a very simple form, I get this error:
🚫 Error: exit status 1
error: Uncaught (in promise) TypeError: error sending request for url (https://slack.com/api/functions.completeSuccess): error trying to connect: invalid peer certificate contents: invalid peer certificate: UnknownIssuer
const resp = await fetch(url, {
^
at async mainFetch (deno:ext/fetch/26_fetch.js:247:14)
at async fetch (deno:ext/fetch/26_fetch.js:464:9)
at async BaseSlackAPIClient.apiCall (https://deno.land/x/deno_slack_api@0.0.2/base-client.ts:20:18)
at async RunFunction (https://deno.land/x/deno_slack_runtime@0.5.0/run-function.ts:57:5)
at async DispatchPayload (https://deno.land/x/deno_slack_runtime@0.5.0/dispatch-payload.ts:64:16)
at async runLocally (https://deno.land/x/deno_slack_runtime@0.5.0/local-run-function.ts:29:16)
at async https://deno.land/x/deno_slack_runtime@0.5.0/local-run-function.ts:50:3
Slack version:
Using slack v1.16.4
Deno version:
deno 1.28.3 (release, aarch64-apple-darwin)
v8 10.9.194.5
typescript 4.8.3
Can anyone help with this, please?
I tried to submit a workflow form that is supposed to take a message as input, enrich it, and send it back to the channel.
If you are behind a proxy, you can set the HTTP_PROXY and HTTPS_PROXY environment variables.
Or, if you need to load a CA from a local pem file, you can set the DENO_CERT variable.
See the Deno environment documentation.
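For example, in the shell before running slack run (the proxy host and pem path below are hypothetical):

export HTTPS_PROXY=http://proxy.example.com:8080
export DENO_CERT=/path/to/corporate-ca.pem
slack run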

Error "Property Listeners cannot be empty" occurs when deploy ruby-on-rails project

I am a newbie in AWS CloudFormation. My Elastic Beanstalk worker uses Ruby on Rails. The EB environment is a stack based on a CloudFormation template.
I don't know why, but when I deploy (eb deploy) recently, the Events tab gives the following error message: "Property Listeners cannot be empty".
The AWSEBLoadBalancer is not in the Resources: section of the template, but I do find it in the .ebextensions of the source code.
Resources:
  AWSEBLoadBalancer:
    Properties:
      AccessLoggingPolicy:
        EmitInterval: 5
        Enabled: true
        S3BucketName:
          Ref: LogsBucket
    Type: "AWS::ElasticLoadBalancing::LoadBalancer"
    DependsOn: "LogsBucketPolicy"
  LogsBucket:
    DeletionPolicy: Retain
    Type: "AWS::S3::Bucket"
  LogsBucketPolicy:
    Properties:
      Bucket:
        Ref: LogsBucket
      PolicyDocument:
        Statement:
          - Action:
              - "s3:PutObject"
            Effect: Allow
            Principal:
              AWS:
                "Fn::FindInMap":
                  - Region2ELBAccountId
                  - Ref: "AWS::Region"
                  - AccountId
            Resource:
              "Fn::Join":
                - ""
                - - "arn:aws:s3:::"
                  - Ref: LogsBucket
                  - /AWSLogs/
                  - Ref: "AWS::AccountId"
Can you please give me some hints to solve this problem?
The error message says that you are missing Listeners. With Listeners added, your load balancer definition would look something like this (adjust to your own settings):
AWSEBLoadBalancer:
  Properties:
    Listeners:
      - InstancePort: 80
        InstanceProtocol: HTTP
        LoadBalancerPort: 80
        # PolicyNames:
        #   - String
        Protocol: HTTP
        # SSLCertificateId: String
    AccessLoggingPolicy:
      EmitInterval: 5
      Enabled: true
      S3BucketName:
        Ref: LogsBucket
  Type: "AWS::ElasticLoadBalancing::LoadBalancer"
  DependsOn: "LogsBucketPolicy"
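For completeness: if this lives in .ebextensions, the snippet sits under the Resources: key of a config file, along the lines of the skeleton below (the filename is hypothetical):

# .ebextensions/loadbalancer.config (hypothetical filename)
Resources:
  AWSEBLoadBalancer:
    # definition as in the answer above, with Listeners included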

ECONNRESET when opening a large number of connection in small time period

I have a situation where I want to create a large number of entities on Orion. I am using the Docker versions of Orion and MongoDB with this docker-compose:
version: "3"
services:
mongo:
image: mongo:3.4
volumes:
- /data/docker-mongo/db:/data/db
- /data/docker-mongo/log/mongodb.log:/var/log/mongodb/mongod.log
command: --nojournal
orion:
image: fiware/orion
volumes:
- /data/docker-mongo/log/contextBroker.log:/tmp/contextBroker.log
links:
- mongo
ports:
- "1026:1026"
command: -dbhost mongo
Now, the problem happens when I want to upload 2000 entities (opening a new connection for each; I know it can be done differently, but for now that is the requirement). I successfully create no more than about 600 of them (sometimes fewer, never an exact number); the rest fail to create with this error:
"error": {
"errno": "ECONNRESET",
"code": "ECONNRESET",
"syscall": "read"
},
So I assume this issue has something to do with the maxConnections, reqPoolSize, etc. settings in Orion. But in Docker I failed to locate the Orion config file, and when I type commands like contextBroker -maxConnections 123456, I have no way of knowing whether the setting is accepted by Orion inside the container.
Also, the Orion log is empty, so I cannot determine what is causing this issue when Orion runs in Docker.
So the main questions:
Can Orion running in Docker be used in the same manner as Orion running on a VM (are there any pitfalls)?
And how do I investigate this problem when Orion is running in Docker? I have read a lot of docs/issues but had no luck (or I missed something).
Any advice/solution would really help.
Thanks
{
  "orion" : {
    "version" : "1.13.0-next",
    "uptime" : "2 d, 15 h, 46 m, 34 s",
    "git_hash" : "ae72acf9e8eeaacaf4eb138f7de37bfee4514c6b",
    "compile_time" : "Fri May 4 10:12:18 UTC 2018",
    "compiled_by" : "root",
    "compiled_in" : "1901fd6bb51a",
    "release_date" : "Fri May 4 10:12:18 UTC 2018",
    "doc" : "https://fiware-orion.readthedocs.org/en/master/"
  }
}
{ Error: socket hang up
    at createHangUpError (_http_client.js:313:15)
    at Socket.socketOnEnd (_http_client.js:416:23)
    at Socket.emit (events.js:187:15)
    at endReadableNT (_stream_readable.js:1090:12)
    at process._tickCallback (internal/process/next_tick.js:63:19) code: 'ECONNRESET' }

error:
 { Error: connect ECONNREFUSED ipofvirtualm:1026
     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
   errno: 'ECONNREFUSED',
   code: 'ECONNREFUSED',
   syscall: 'read',
   address: 'ipofvm',
   port: 1026 },
options:
 { method: 'POST',
   uri: 'http://ip:1026/v2/entities?options=keyValues',
   headers:
    { 'Fiware-Service': 'some service',
      'Fiware-ServicePath': 'some servicepath' },
   body:
    { id: 'F0B935',
      type: 'Transaction',
      refEmitter: 'F0B935',
      refReceiver: '7501JXG',
      refCapturer: 'testtdata',
      date: '12/12/2017 13:25',
      refTransferredResources: 'testtdata',
      transferredLoad: 92 },
   json: true,
   callback: [Function: RP$callback],
   transform: undefined,
   simple: true,
   resolveWithFullResponse: false,
   transform2xxOnly: false },
I am using the request-promise library for making the calls; I tried others and they had the same issue. Since I cannot send you all 2000 responses, I will describe the behavior: it creates around 30 entities, then the next few (or more) return ECONNRESET, then it starts creating again, and so on.
What confuses me is that it is not failing completely; it works, but not as intended. It also seems that Orion closes or hangs up the socket for some period, then it is open again and creates normally, and so on. If you need any more info, ask, and thanks for the quick answer.
Instead of opening a new connection per entity, why don't you use
POST /v2/op/update
and create all the entities in just one batch, or a couple of batches?
See some code at
https://github.com/Fiware/dataModels/blob/master/Weather/WeatherObserved/harvest/spain_weather_observed_harvest.py#L235
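For illustration, a sketch of such batching with the request-promise library already used in the question. The Orion host, the headers, and the batch size of 100 are assumptions; actionType: append gives create-or-update semantics on the batch endpoint:

// batch-create.js — a sketch, assuming Orion on localhost:1026 and the
// same Fiware-Service headers as in the question.
const rp = require('request-promise');

async function createInBatches(entities, batchSize = 100) {
  for (let i = 0; i < entities.length; i += batchSize) {
    await rp({
      method: 'POST',
      uri: 'http://localhost:1026/v2/op/update?options=keyValues',
      headers: {
        'Fiware-Service': 'some service',
        'Fiware-ServicePath': 'some servicepath'
      },
      body: {
        actionType: 'append', // create-or-update each listed entity
        entities: entities.slice(i, i + batchSize)
      },
      json: true
    });
  }
}

With 2000 entities, this means 20 requests instead of 2000 separate connections.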
With regard to CLI argument passing to the context broker running inside Docker, use the command line in the docker-compose file, e.g.:
command: -dbhost mongo -maxConnections 123456
However, I'm not sure that will help solve the problem, as Orion should handle your use case without any special customization. Looking at the error message (which seems to be about some problem at the TCP layer), I wonder if the Docker networking layer is acting as a bottleneck in some way...
In addition, the suggestion made by Jose Manuel Cantera about using POST /v2/op/update would be a good idea. It would reduce connection stress at the network layer and may help alleviate the problem.
If you cannot change your update strategy, maybe using an inter-request delay (100-200 ms) could also help.
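For the delay variant, a sketch (createEntity stands in for the asker's existing per-entity POST; 150 ms is an arbitrary value in the suggested range):

// Pause between sequential creations to avoid bursts of connections.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function createSlowly(entities) {
  for (const entity of entities) {
    await createEntity(entity); // placeholder for the existing per-entity request
    await delay(150);
  }
}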

Zuul Proxy not able to route, resulting in com.netflix.zuul.exception.ZuulException: Forwarding error

I have two simple services:
transactions-core-service and transactions-api-service.
transactions-api-service invokes transactions-core-service to return a list of transactions. transactions-api-service is enabled with a Hystrix command.
Both are registered in Eureka server with below services ids:
TRANSACTIONS-API-SERVICE n/a (1) (1) UP (1) - 192.168.2.12:transactions-api-service:8083
TRANSACTIONS-CORE-SERVICE n/a (1) (1) UP (1) - 192.168.2.12:transactions-core-service:8087
Below is Zuul server:
@SpringBootApplication
@Controller
@EnableZuulProxy
public class ZuulApplication {
    public static void main(String[] args) {
        new SpringApplicationBuilder(ZuulApplication.class).web(true).run(args);
    }
}
Zuul Configurations:
===============================================
info:
  component: Zuul Server

server:
  port: 8765

endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false

zuul:
  ignoredServices: "*"
  routes:
    transactions-api-service:
    path: transactions/accounts/**
    serviceId: transactions-api-service

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

logging:
  level:
    ROOT: INFO
    org.springframework.web: DEBUG
===============================================
When I try to invoke transactions-api-service with the URL (http://localhost:8765/transactions/accounts/123/transactions/786), I get a ZuulException:
2016-02-13 11:29:29.050  WARN 4936 --- [nio-8765-exec-1] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering
com.netflix.zuul.exception.ZuulException: Forwarding error
    at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:131) ~[spring-cloud-netflix-core-1.1.0.M3.jar:1.1.0.M3]
    at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.run(RibbonRoutingFilter.java:76) ~[spring-cloud-netflix-core-1.1.0.M3.jar:1.1.0.M3] ......
If I invoke transactions-api-service directly (with localhost/accounts/123/transactions/786), it works fine.
Am I missing any configurations on Zuul?
You need to change the Zuul execution timeout by adding this property in the application.yml of the Zuul server:
# Increase the Hystrix timeout to 60s (globally)
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 60000
Please refer to this thread on netflix issues: https://github.com/spring-cloud/spring-cloud-netflix/issues/321
Faced the same issue. In my case, Zuul was using service discovery. As a solution, the configuration below worked like a charm:
ribbon.ReadTimeout=60000
Reference to the property usage is here.
You have incorrect indentation. Instead of:
zuul:
  ignoredServices: "*"
  routes:
    transactions-api-service:
    path: transactions/accounts/**
    serviceId: transactions-api-service
It should be:
zuul:
  ignoredServices: "*"
  routes:
    transactions-api-service:
      path: transactions/accounts/**
      serviceId: transactions-api-service
You can use these to avoid the 500 error:
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=1000000
zuul.host.connect-timeout-millis=10000
zuul.host.socket-timeout-millis=1000000
In case your Zuul gateway uses a discovery service for service lookup, you can disable the Hystrix timeout, or increase it, as below:
# Disable the Hystrix timeout globally (for all services)
hystrix.command.default.execution.timeout.enabled: false
# Disable the timeout for a particular service
hystrix.command.<serviceName>.execution.timeout.enabled: false
# Increase the Hystrix timeout to 60s (globally)
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000
# Increase the Hystrix timeout to 60s (per service)
hystrix.command.<serviceName>.execution.isolation.thread.timeoutInMilliseconds: 60000
I was having the same issue with the Zuul server; it got resolved with the properties below.
Let's say you have two clients, clientA and clientB.
For clientA, spring.application.name=clientA and server.port=1111;
for clientB, spring.application.name=clientB and server.port=2222, in their respective application.properties files.
You want to connect these two services to a Zuul server running on port 8087.
Add the properties below in your Zuul server's application.properties file:
spring.application.name=gateway-service
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
clientA.ribbon.listOfServers=http://localhost:1111
clientB.ribbon.listOfServers=http://localhost:2222
server.port=8087
Note: I am using a Eureka client with my Zuul server; you can skip that part. Adding this solution in case it's helpful for someone.
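If I read that setup correctly, Zuul's default service-id routes then apply, so a request like
http://localhost:8087/clientA/some/path
should be forwarded to http://localhost:1111/some/path (and similarly for clientB on port 2222).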
