I have a problem similar to Unable to invoke lambda function from localstack via aws cli, but with a different symptom. The solution described in that question does not work for me.
I am running on Windows 10, with the latest versions of Docker, Terraform and LocalStack (as of April/May 2021). All commands are typed into a Windows cmd window with Administrator permissions, set to the correct working folder.
I start LocalStack using docker-compose up -d, with the following docker-compose.yml:
version: '3.2'
services:
  localstack:
    image: localstack/localstack-full:latest
    container_name: localstack_serverless1
    ports:
      - '4566:4566'
      - '8055:8080'
    environment:
      # - HOSTNAME_EXTERNAL=localstack
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
      - LAMBDA_EXECUTOR=docker
      - START_WEB=1
      #- DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - './.localstack:/tmp/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock'
      #- './docker.sock:/var/run/docker.sock'
The commented-out lines are things I have tried that made no difference.
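(For what it's worth, one sanity check after docker-compose up — assuming the requests library is available and that this LocalStack version still exposes a /health endpoint on the edge port, which is an assumption on my part — is to list which services report as running:)

import requests

# Query LocalStack's health endpoint on the edge port (4566) and print the
# per-service status it reports. The endpoint path is an assumption for this
# LocalStack version.
health = requests.get("http://localhost:4566/health").json()
for service, status in sorted(health.get("services", {}).items()):
    print(service, status)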
I then run terraform init and terraform apply, with the following input:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  access_key                  = "mock_access_key"
  region                      = "us-east-1"
  s3_force_path_style         = true
  secret_key                  = "mock_secret_key"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  # AWS Provider version held back for this issue:
  # https://github.com/localstack/localstack/issues/1818
  # (localstack's fix not released yet)
  # version = "2.39.0"

  endpoints {
    apigateway     = "http://localhost:4566"
    cloudformation = "http://localhost:4566"
    cloudwatch     = "http://localhost:4566"
    dynamodb       = "http://localhost:4566"
    ec2            = "http://localhost:4566"
    es             = "http://localhost:4566"
    firehose       = "http://localhost:4566"
    iam            = "http://localhost:4566"
    kinesis        = "http://localhost:4566"
    lambda         = "http://localhost:4566"
    route53        = "http://localhost:4566"
    redshift       = "http://localhost:4566"
    s3             = "http://localhost:4566"
    secretsmanager = "http://localhost:4566"
    ses            = "http://localhost:4566"
    sns            = "http://localhost:4566"
    sqs            = "http://localhost:4566"
    ssm            = "http://localhost:4566"
    stepfunctions  = "http://localhost:4566"
    sts            = "http://localhost:4566"
  }
}
resource "aws_s3_bucket" "serverless1_bucket1" {
bucket = "serverless1-bucket1"
acl = "private"
}
resource "aws_s3_bucket_object" "upload_code_lambda1" {
bucket = "serverless1-bucket1"
key = "v1.0.0/lambda1.zip"
source = "lambda1.zip"
depends_on = [aws_s3_bucket.serverless1_bucket1]
}
resource "aws_lambda_function" "serverless1_lambda1" {
function_name = "serverless1-lambda1"
# The bucket name as created earlier with "aws s3api create-bucket"
s3_bucket = "serverless1-bucket1"
s3_key = "v1.0.0/lambda1.zip"
# "lambda1" is the filename within the zip file (lambda1.js) and "handler"
# is the name of the property under which the handler function was
# exported in that file.
handler = "lambda1.handler"
runtime = "nodejs10.x"
role = aws_iam_role.lambda_exec.arn
# Ant Waters: I have added this to make lambda creation wait until the code has been uploaded. I'm not sure if it is needed or not.
depends_on = [aws_s3_bucket_object.upload_code_lambda1]
}
# IAM role which dictates what other AWS services the Lambda function
# may access.
resource "aws_iam_role" "lambda_exec" {
name = "serverless_example_lambda"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_api_gateway_rest_api" "example" {
name = "ServerlessExample1"
description = "Terraform Serverless Application Example"
}
resource "aws_api_gateway_resource" "proxy" {
rest_api_id = aws_api_gateway_rest_api.example.id
parent_id = aws_api_gateway_rest_api.example.root_resource_id
path_part = "{proxy+}"
}
resource "aws_api_gateway_method" "proxy" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_resource.proxy.id
http_method = "ANY"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "lambda" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_method.proxy.resource_id
http_method = aws_api_gateway_method.proxy.http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.serverless1_lambda1.invoke_arn
}
resource "aws_api_gateway_method" "proxy_root" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_rest_api.example.root_resource_id
http_method = "ANY"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "lambda_root" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_method.proxy_root.resource_id
http_method = aws_api_gateway_method.proxy_root.http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.serverless1_lambda1.invoke_arn
}
resource "aws_api_gateway_deployment" "example" {
depends_on = [
aws_api_gateway_integration.lambda,
aws_api_gateway_integration.lambda_root,
]
rest_api_id = aws_api_gateway_rest_api.example.id
stage_name = "test"
}
resource "aws_lambda_permission" "apigw" {
statement_id = "AllowAPIGatewayInvoke"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.serverless1_lambda1.function_name
principal = "apigateway.amazonaws.com"
# The "/*/*" portion grants access from any method on any resource
# within the API Gateway REST API.
source_arn = "${aws_api_gateway_rest_api.example.execution_arn}/*/*"
}
output "base_url" {
value = aws_api_gateway_deployment.example.invoke_url
}
The gateway stuff is copied from a tutorial, and I don't understand it yet.
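(In case it is useful context: once the deployment exists, I believe LocalStack serves the deployed stage through its edge port under a path of the form /restapis/&lt;rest_api_id&gt;/&lt;stage&gt;/_user_request_/&lt;path&gt;. The API ID and the request path below are placeholders, not values from the tutorial; a rough sketch with requests would be:)

import requests

# rest_api_id is a placeholder - the real ID comes from the Terraform output
# or from "aws apigateway get-rest-apis" against the LocalStack endpoint.
rest_api_id = "abc123"
url = "http://localhost:4566/restapis/{}/test/_user_request_/hello".format(rest_api_id)

resp = requests.get(url)
print(resp.status_code, resp.text)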
I can then see the lambda in "https://app.localstack.cloud/resources" and "https://app.localstack.cloud/resources/gateway", and there is an Invoke button on the gateway page.
However, when I press it nothing seems to happen, apart from an error being logged in CloudWatch.
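(The CloudWatch entry itself is not reproduced above. A minimal way to pull it out of LocalStack, assuming boto3 is installed and the usual /aws/lambda/&lt;function-name&gt; log group naming applies here — an assumption on my part — is:)

import boto3

logs = boto3.client(
    "logs",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="mock_access_key",
    aws_secret_access_key="mock_secret_key",
)

group = "/aws/lambda/serverless1-lambda1"  # assumed log group name
streams = logs.describe_log_streams(logGroupName=group, orderBy="LastEventTime", descending=True)
for stream in streams["logStreams"][:1]:
    events = logs.get_log_events(logGroupName=group, logStreamName=stream["logStreamName"])
    for event in events["events"]:
        print(event["message"])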
Similarly, I can see the function using the AWS CLI, with the call:
aws lambda get-function --function-name "serverless1-lambda1" --endpoint-url=http://localhost:4566
which returns:
{
    "Configuration": {
        "FunctionName": "serverless1-lambda1",
        "FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:serverless1-lambda1",
        "Runtime": "nodejs10.x",
        "Role": "arn:aws:iam::000000000000:role/serverless_example_lambda",
        "Handler": "lambda1.handler",
        "CodeSize": 342,
        "Description": "",
        "Timeout": 3,
        "MemorySize": 128,
        "LastModified": "2021-05-07T10:17:32.305+0000",
        "CodeSha256": "qoP7ORF4AUC8VJWLR0bGGRRKGtNrQwRj2hCa1n+3wk4=",
        "Version": "$LATEST",
        "VpcConfig": {},
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "RevisionId": "ea163f0f-81ce-4b3a-a0d1-7b44379c6492",
        "State": "Active",
        "LastUpdateStatus": "Successful",
        "PackageType": "Zip"
    },
    "Code": {
        "Location": "http://localhost:4566/2015-03-31/functions/serverless1-lambda1/code"
    },
    "Tags": {}
}
However, when I try to invoke it using:
aws --endpoint-url=http://localhost:4566 lambda invoke --function-name "serverless1-lambda1" output.json
The return is:
{
    "errorMessage": "Lambda process returned error status code: 1. Result: . Output:\nUnable to find image 'lambci/lambda:nodejs10.x' locally\nError response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nmust specify at least one container source\njson: cannot unmarshal array into Go value of type types.ContainerJSON",
    "errorType": "InvocationException",
    "stackTrace": [
        " File \"/opt/code/localstack/localstack/services/awslambda/lambda_api.py\", line 602, in run_lambda\n result = LAMBDA_EXECUTOR.execute(func_arn, func_details, event, context=context,\n",
        " File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 176, in execute\n return do_execute()\n",
        " File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 168, in do_execute\n return _run(func_arn=func_arn)\n",
        " File \"/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py\", line 149, in wrapped\n raise e\n",
        " File \"/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py\", line 145, in wrapped\n result = func(*args, **kwargs)\n",
        " File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 159, in _run\n raise e\n",
        " File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 147, in _run\n result = self._execute(func_arn, func_details, event, context, version)\n",
        " File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 325, in _execute\n result = self.run_lambda_executor(cmd, stdin, env_vars=environment, func_details=func_details)\n",
        " File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 231, in run_lambda_executor\n raise InvocationException('Lambda process returned error status code: %s. Result: %s. Output:\\n%s' %\n"
    ]
}
and the Docker window trace shows:
localstack_serverless1 | 2021-05-07T10:59:50:WARNING:localstack.services.awslambda.lambda_executors: Empty event body specified for invocation of Lambda "arn:aws:lambda:us-east-1:000000000000:function:serverless1-lambda1"
localstack_serverless1 | 2021-05-07T10:59:50:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: CONTAINER_ID="$(docker create -i -e AWS_REGION="$AWS_REGION" -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY" -e NODE_TLS_REJECT_UNAUTHORIZED="$NODE_TLS_REJECT_UNAUTHORIZED" --rm "lambci/lambda:nodejs10.x" "lambda1.handler")";docker cp "/tmp/localstack/zipfile.9cb3ff88/." "$CONTAINER_ID:/var/task"; docker start -ai "$CONTAINER_ID";
localstack_serverless1 | 2021-05-07T11:00:05:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:serverless1-lambda1 result / log output:
localstack_serverless1 |
localstack_serverless1 | > Unable to find image 'lambci/lambda:nodejs10.x' locally
localstack_serverless1 | > Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
localstack_serverless1 | > must specify at least one container source
localstack_serverless1 | > json: cannot unmarshal array into Go value of type types.ContainerJSON
localstack_serverless1 | 2021-05-07T11:00:05:INFO:localstack.services.awslambda.lambda_api: Error executing Lambda function arn:aws:lambda:us-east-1:000000000000:function:serverless1-lambda1: Lambda process returned error status code: 1. Result: . Output:
localstack_serverless1 | Unable to find image 'lambci/lambda:nodejs10.x' locally
localstack_serverless1 | Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
localstack_serverless1 | must specify at least one container source
localstack_serverless1 | json: cannot unmarshal array into Go value of type types.ContainerJSON Traceback (most recent call last):
localstack_serverless1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 602, in run_lambda
localstack_serverless1 | result = LAMBDA_EXECUTOR.execute(func_arn, func_details, event, context=context,
localstack_serverless1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 176, in execute
localstack_serverless1 | return do_execute()
localstack_serverless1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 168, in do_execute
localstack_serverless1 | return _run(func_arn=func_arn)
localstack_serverless1 | File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 149, in wrapped
localstack_serverless1 | raise e
localstack_serverless1 | File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 145, in wrapped
localstack_serverless1 | result = func(*args, **kwargs)
localstack_serverless1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 159, in _run
localstack_serverless1 | raise e
localstack_serverless1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 147, in _run
localstack_serverless1 | result = self._execute(func_arn, func_details, event, context, version)
localstack_serverless1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 325, in _execute
localstack_serverless1 | result = self.run_lambda_executor(cmd, stdin, env_vars=environment, func_details=func_details)
localstack_serverless1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 231, in run_lambda_executor
localstack_serverless1 | raise InvocationException('Lambda process returned error status code: %s. Result: %s. Output:\n%s' %
localstack_serverless1 | localstack.services.awslambda.lambda_executors.InvocationException: Lambda process returned error status code: 1. Result: . Output:
localstack_serverless1 | Unable to find image 'lambci/lambda:nodejs10.x' locally
localstack_serverless1 | Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
localstack_serverless1 | must specify at least one container source
localstack_serverless1 | json: cannot unmarshal array into Go value of type types.ContainerJSON
localstack_serverless1 |
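(Reading the trace, the docker executor inside the LocalStack container cannot reach registry-1.docker.io to pull lambci/lambda:nodejs10.x. Since the mounted /var/run/docker.sock points at the host's Docker daemon, a possible workaround — an assumption on my part, not something I have verified — is to pull the runtime image on the host first so the executor finds it locally; a sketch with the docker Python SDK:)

import docker

# Talks to the same daemon that LocalStack uses via the mounted docker.sock.
client = docker.from_env()

# Pre-pull the runtime image so the executor's "docker create lambci/lambda:nodejs10.x"
# no longer needs to reach the registry from inside the LocalStack container.
image = client.images.pull("lambci/lambda", tag="nodejs10.x")
print(image.tags)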
I got this working by changing docker-compose.yml to the following:
version: '3.2'
services:
  localstack:
    image: localstack/localstack-full:latest
    container_name: localstack_serverless1
    ports:
      - '4566:4566'
      - '4571:4571'
      - '8055:8080'
    environment:
      # - HOSTNAME_EXTERNAL=localstack
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
      - LAMBDA_EXECUTOR=docker
      - START_WEB=1
      - DOCKER_HOST=unix:///var/run/docker.sock
      - HOST_TMP_FOLDER=./.localstack
    volumes:
      - './.localstack:/tmp/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock' # I don't understand what this corresponds to on my PC? But it is the only option I can get to work!
The last line is very curious, as I don't understand what it maps to on my Windows PC.
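(As far as I understand it, with Docker Desktop that socket path inside the container is forwarded to the Docker daemon running in Docker Desktop's own VM, so mounting it lets LocalStack start the lambci/lambda containers as siblings on that daemon. One way to convince myself, assuming the docker Python SDK on the host, is to list the containers the daemon knows about while an invocation is running:)

import docker

# from_env() on Windows talks to the Docker Desktop daemon - the same daemon
# the LocalStack container reaches through the mounted /var/run/docker.sock,
# so the lambci/lambda containers should show up next to localstack_serverless1.
client = docker.from_env()
for container in client.containers.list(all=True):
    print(container.name, container.image.tags)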
Related
I am new to blockchain. I am trying to run Caliper on my chaincode in the test network and getting the following errors. I am not sure what I need to do here to get the arguments passed successfully to the chaincode.
Caliper command I used:
npx caliper launch manager --caliper-workspace ./ --caliper-networkconfig networks/networkConfig.yaml --caliper-benchconfig benchmarks/myPolicyBenchmark.yaml --caliper-flow-only-test --caliper-fabric-gateway-enabled
My Workload
'use strict';

const { WorkloadModuleBase } = require('@hyperledger/caliper-core');

class MyWorkload extends WorkloadModuleBase {
    constructor() {
        super();
    }

    async initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext) {
        await super.initializeWorkloadModule(workerIndex, totalWorkers, roundIndex, roundArguments, sutAdapter, sutContext);
        for (let i = 0; i < this.roundArguments.policies; i++) {
            const policyID = `${this.workerIndex}_${i}`;
            console.log(`Worker ${this.workerIndex}: Creating policy ${policyID}`);
            const request = {
                contractId: this.roundArguments.contractId,
                contractFunction: 'CreatePolicy',
                invokerIdentity: 'Admin',
                contractAtguments: [policyID, "Test Policy 2", "This is a test policy", "PUBLIC", "PUBLIC", "[\"READ\"]", "ABC", "abc#gmail.com", "[\"NONE\"]", "{}"],
                readOnly: false
            };
            await this.sutAdapter.sendRequests(request);
        }
    }

    async submitTransaction() {
        const randomId = Math.floor(Math.random() * this.roundArguments.policies);
        const myArgs = {
            contractId: this.roundArguments.contractId,
            contractFunction: 'ReadPolicy',
            invokerIdentify: 'Admin',
            //contractArguments: [`${this.workerIndex}_${randomId}`],
            contractArguments: ['3'],
            readOnly: true
        };
        await this.sutAdapter.sendRequests(myArgs);
    }

    async cleanupWorkloadModule() {
        for (let i = 0; i < this.roundArguments.policies; i++) {
            const policyID = `${this.workerIndex}_${i}`;
            console.log(`Worker ${this.workerIndex}: Deleting policy ${policyID}`);
            const request = {
                contractId: this.roundArguments.contractId,
                contractFunction: 'DeletePolicy',
                invokerIdentity: 'Admin',
                contractAtguments: [policyID],
                readOnly: false
            };
            await this.sutAdapter.sendRequests(request);
        }
    }
}

function createWorkloadModule() {
    return new MyWorkload();
}

module.exports.createWorkloadModule = createWorkloadModule;
The error I get is
2022.06.14-00:46:51.063 info [caliper] [caliper-worker] Info: worker 0 prepare test phase for round 0 is starting...
Worker 0: Creating policy 0_0
2022.06.14-00:46:51.078 info [caliper] [caliper-worker] Info: worker 1 prepare test phase for round 0 is starting...
Worker 1: Creating policy 1_0
2022-06-14T06:46:51.130Z - error: [Transaction]: Error: No valid responses from any peers. Errors:
peer=peer0.org2.example.com:9051, status=500, message=error in simulation: transaction returned with failure: Error: Expected 10 parameters, but 0 have been supplied
peer=peer0.org1.example.com:7051, status=500, message=error in simulation: transaction returned with failure: Error: Expected 10 parameters, but 0 have been supplied
2022.06.14-00:46:51.132 error [caliper] [connectors/v2/FabricGateway] Failed to perform submit transaction [CreatePolicy] using arguments [], with error: Error: No valid responses from any peers. Errors:
peer=peer0.org2.example.com:9051, status=500, message=error in simulation: transaction returned with failure: Error: Expected 10 parameters, but 0 have been supplied
peer=peer0.org1.example.com:7051, status=500, message=error in simulation: transaction returned with failure: Error: Expected 10 parameters, but 0 have been supplied
    at newEndorsementError (/Users/sam/my_thesis/Project/changetracker/caliper-workspace/node_modules/fabric-network/lib/transaction.js:49:12)
    at getResponsePayload (/Users/sam/my_thesis/Project/changetracker/caliper-workspace/node_modules/fabric-network/lib/transaction.js:17:23)
    at Transaction.submit (/Users/sam/my_thesis/Project/changetracker/caliper-workspace/node_modules/fabric-network/lib/transaction.js:212:28)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async V2FabricGateway._submitOrEvaluateTransaction (/Users/sam/my_thesis/Project/changetracker/caliper-workspace/node_modules/@hyperledger/caliper-fabric/lib/connector-versions/v2/FabricGateway.js:376:26)
    at async V2FabricGateway._sendSingleRequest (/Users/sam/my_thesis/Project/changetracker/caliper-workspace/node_modules/@hyperledger/caliper-fabric/lib/connector-versions/v2/FabricGateway.js:170:16)
    at async V2FabricGateway.sendRequests (/Users/sam/my_thesis/Project/changetracker/caliper-workspace/node_modules/@hyperledger/caliper-core/lib/common/core/connector-base.js:78:28)
    at async MyWorkload.initializeWorkloadModule (/Users/sam/my_thesis/Project/changetracker/caliper-workspace/workload/readPolicy.js:25:13)
    at async CaliperWorker.prepareTest (/Users/sam/my_thesis/Project/changetracker/caliper-workspace/node_modules/@hyperledger/caliper-core/lib/worker/caliper-worker.js:160:13)
    at async WorkerMessageHandler._handlePrepareMessage (/Users/sam/my_thesis/Project/changetracker/caliper-workspace/node_modules/@hyperledger/caliper-core/lib/worker/worker-message-handler.js:210:13)
The same inputs work fine when run directly through the peer CLI:
peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile "${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem" -C mychannel -n policylist --peerAddresses localhost:7051 --tlsRootCertFiles "${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt" --peerAddresses localhost:9051 --tlsRootCertFiles "${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt" -c '{"function":"CreatePolicy","Args":["4","Test Policy 2","This is a test policy", "PUBLIC","PUBLIC","[\"READ\"]", "ABC", "abc#gmail.com", "[\"NONE\"]", "{}" ]}'
2022-06-14 00:58:06.489 CST 0001 INFO [chaincodeCmd] chaincodeInvokeOrQuery -> Chaincode invoke successful. result: status:200 payload:"{\"PolicyId\":\"4\",\"docType\":\"PolicyDoc\",\"PolicyName\":\"Test Policy 2\",\"PolicyDescription\":\"This is a test policy\",\"PolicyDataClassification\":\"PUBLIC\",\"PolicyAccessCategory\":\"PUBLIC\",\"PolicyAccessMethod\":[\"READ\"],\"PolicyOwner\":\"ABC\",\"PolicyApprovalStatus\":\"NEW\",\"PolicyCreatedOn\":\"Tue, 14 Jun 2022 06:58:06 GMT\",\"PolicyContactEmail\":\"abc#gmail.com\",\"PolicyRestrictions\":\"[\\\"NONE\\\"]\",\"PolicyMiscellaneous\":{}}"
I have a Node gRPC server running on localhost and my gRPC client is a Python Flask server. If the client also runs directly on localhost then everything works as intended. But once I host the client (Flask server) in a Docker container, it is unable to reach the gRPC server.
The error simply states:
RPC Target is unavaiable
I can call the flask-api from the host without issues. Also I changed the server address from 'localhost' to 'host.docker.internal', which is getting resolved correctly. Not sure if I am doing something wrong or this just doesn't work. I greatly appreciate any help or suggestions. Thanks!
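(To separate Docker networking problems from TLS/gRPC problems, a first check I can run inside the container, using only the standard library, is a plain TCP connection to the server port:)

import socket

# If this fails, the issue is container networking / host.docker.internal
# resolution, not the gRPC or TLS configuration.
with socket.create_connection(("host.docker.internal", 9090), timeout=5) as sock:
    print("TCP connection established:", sock.getpeername())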
Code snippets of the server, client and docker-compose:
server.js (Node)
...
const port = 9090;
const url = `0.0.0.0:${port}`;

// gRPC Credentials
import { readFileSync } from 'fs';

let credentials = ServerCredentials.createSsl(
    readFileSync('./certs/ca.crt'),
    [{
        cert_chain: readFileSync('./certs/server.crt'),
        private_key: readFileSync('./certs/server.key')
    }],
    false
)
...
const server = new Server({
    "grpc.keepalive_permit_without_calls": 1,
    "grpc.keepalive_time_ms": 10000,
});
...
server.bindAsync(
    url,
    credentials,
    (err, port) => {
        if (err) logger.error(err);
        server.start();
    }
);
grpc_call.py (status_update is called by app.py)
import os
import logging as logger
from os.path import dirname, join

import config.base_pb2 as base_pb2
import config.base_pb2_grpc as base_pb2_grpc
import grpc


# Read in ssl files
def _load_credential_from_file(filepath):
    real_path = join(dirname(dirname(__file__)), filepath)
    with open(real_path, "rb") as f:
        return f.read()


# -----------------------------------------------------------------------------
def status_update(info, status):
    SERVER_CERTIFICATE = _load_credential_from_file("config/certs/ca.crt")
    SERVER_CERTIFICATE_KEY = _load_credential_from_file("config/certs/client.key")
    ROOT_CERTIFICATE = _load_credential_from_file("config/certs/client.crt")

    credential = grpc.ssl_channel_credentials(
        root_certificates=SERVER_CERTIFICATE,
        private_key=SERVER_CERTIFICATE_KEY,
        certificate_chain=ROOT_CERTIFICATE,
    )

    # grpcAddress = "http://localhost"
    grpcAddress = "http://host.docker.internal"
    grpcFull = grpcAddress + ":9090"

    with grpc.secure_channel(grpcFull, credential) as channel:
        stub = base_pb2_grpc.ProjectStub(channel)
        request = base_pb2.ContainerId(id=int(info), status=status)
        try:
            response = stub.ContainerStatus(request)
        except grpc.RpcError as rpc_error:
            logger.error("Error #STATUS_UPDATE")
            if rpc_error.code() == grpc.StatusCode.CANCELLED:
                logger.error("RPC Request got cancelled")
            elif rpc_error.code() == grpc.StatusCode.UNAVAILABLE:
                logger.error("RPC Target is unavaiable")
            else:
                logger.error(
                    f"Unknown RPC error: code={rpc_error.code()} message={rpc_error.details()}"
                )
            raise ConnectionError(rpc_error.code())
        else:
            logger.info(f"Received message: {response.message}")
    return
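(One detail I am unsure about: as far as I know, grpc.secure_channel expects a plain host:port target rather than a URL, so the http:// prefix above may itself produce UNAVAILABLE. A minimal sketch of the channel setup under that assumption, reusing the same certificate files:)

import grpc

def _read(path):
    with open(path, "rb") as f:
        return f.read()

credential = grpc.ssl_channel_credentials(
    root_certificates=_read("config/certs/ca.crt"),
    private_key=_read("config/certs/client.key"),
    certificate_chain=_read("config/certs/client.crt"),
)

# Note: no "http://" scheme - just host:port.
target = "host.docker.internal:9090"
with grpc.secure_channel(target, credential) as channel:
    # Raises if the channel cannot become ready within the timeout.
    grpc.channel_ready_future(channel).result(timeout=5)
    print("channel ready")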
Docker-compose.yaml
version: "3.9"
services:
test-flask:
image: me/test-flask
container_name: test-flask
restart: "no"
env_file: .env
ports:
- 0.0.0.0:8010:8010
command: python3 -m flask run --host=0.0.0.0 --port=8010
I am using the following Python code to create a service group:
import json
import requests

url = 'http://localhost:4041/iot/services'
headers = {'Content-Type': "application/json", 'fiware-service': "openiot", 'fiware-servicepath': "/mtp"}

data = {
    "services": [
        {
            "apikey": "456dgffdg56465dfg",
            "cbroker": "http://orion:1026",
            "entity_type": "Door",
            # resource attribute is left blank since HTTP communication is not being used
            "resource": ""
        }
    ]
}

res = requests.post(url, json=data, headers=headers)
# print(res.status_code)
if res.status_code == 201:
    print("Created")
elif res.status_code == 409:
    print("A resource cannot be created because it already exists")
else:
    print(res.raise_for_status())
But when trying to provision an actuator I get a 400 Bad Request error with the code below:
import json
import requests

url = 'http://localhost:4041/iot/devices'
headers = {'Content-Type': "application/json", 'fiware-service': "openiot", 'fiware-servicepath': "/mtp"}

data = {
    "devices": [
        {
            "device_id": "door003",
            "entity_name": "urn:ngsi-ld:Door:door003",
            "entity_type": "Door",
            "protocol": "PDI-IoTA-UltraLight",
            "transport": "MQTT",
            "commands": [
                {"name": "unlock", "type": "command"},
                {"name": "open", "type": "command"},
                {"name": "close", "type": "command"},
                {"name": "lock", "type": "command"}
            ],
            "attributes": [
                {"object_id": "s", "name": "state", "type": "Text"}
            ]
        }
    ]
}

res = requests.post(url, json=data, headers=headers)
# print(res.status_code)
if res.status_code == 201:
    print("Created")
elif res.status_code == 409:
    print("Entity cannot be created because it already exists")
else:
    print(res.raise_for_status())
Here is the error message I get in the console:
iot-agent | time=2021-02-17T11:39:44.132Z | lvl=DEBUG | corr=16f27639-49c2-4419-a926-2433805dbdb3 | trans=16f27639-49c2-4419-a926-2433805dbdb3 | op=IoTAgentNGSI.GenericMiddlewares | from=n/a | srv=smartdoor | subsrv=/mtp | msg=Error [BAD_REQUEST] handling request: Request error connecting to the Context Broker: 501 | comp=IoTAgent
iot-agent | time=2021-02-17T11:39:44.133Z | lvl=DEBUG | corr=390f5530-f537-4efa-980a-890a44153811 | trans=390f5530-f537-4efa-980a-890a44153811 | op=IoTAgentNGSI.DomainControl | from=n/a | srv=smartdoor | subsrv=/mtp | msg=response-time: 29 | comp=IoTAgent
What is strange is that if I remove the commands from the payload, the device provisioning works fine. Is there something I am doing wrong while trying to provision an actuator (not a sensor)?
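(Not sure whether this narrows it down, but since the 501 comes back from the Context Broker during the command registration step, it may help to compare what the agent has provisioned with what registrations Orion actually holds. A sketch with requests, assuming Orion's port 1026 is also published on localhost, which is not shown in my compose file:)

import requests

headers = {'fiware-service': "openiot", 'fiware-servicepath': "/mtp"}

# What the IoT Agent thinks it has provisioned.
print(requests.get('http://localhost:4041/iot/devices', headers=headers).json())

# What registrations the Context Broker holds for command forwarding.
print(requests.get('http://localhost:1026/v2/registrations', headers=headers).json())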
IoT Agent version:
{"libVersion":"2.14.0-next","port":"4041","baseRoot":"/","version":"1.15.0-next"}
Orion version:
{
  "orion" : {
    "version" : "2.2.0",
    "uptime" : "0 d, 0 h, 59 m, 18 s",
    "git_hash" : "5a46a70de9e0b809cce1a1b7295027eea0aa757f",
    "compile_time" : "Thu Feb 21 10:28:42 UTC 2019",
    "compiled_by" : "root",
    "compiled_in" : "442fc4d225cf",
    "release_date" : "Thu Feb 21 10:28:42 UTC 2019",
    "doc" : "https://fiware-orion.rtfd.io/en/2.2.0/"
  }
}
My docker-compose file looks as follows:
iot-agent:
  image: fiware/iotagent-ul:latest
  hostname: iot-agent
  container_name: iot-agent
  restart: unless-stopped
  depends_on:
    - mongo-db
  networks:
    - default
  expose:
    - "4041"
  ports:
    - "4041:4041"
  environment:
    - IOTA_CB_HOST=orion
    - IOTA_CB_PORT=1026
    - IOTA_NORTH_PORT=4041
    - IOTA_REGISTRY_TYPE=mongodb
    - IOTA_LOG_LEVEL=DEBUG
    - IOTA_TIMESTAMP=true
    - IOTA_CB_NGSI_VERSION=v2
    - IOTA_AUTOCAST=true
    - IOTA_MONGO_HOST=mongo-db
    - IOTA_MONGO_PORT=27017
    - IOTA_MONGO_DB=iotagentul
    - IOTA_PROVIDER_URL=http://iot-agent:4041
    - IOTA_MQTT_HOST=mosquitto
    - IOTA_MQTT_PORT=1883
I am trying to pass JSON in an environment variable of a systemd unit file with Terraform. I am using an external provider named CT to generate Ignition from the YAML configuration.
CT Config:
data "ct_config" "ignition" {
# Reference: https://github.com/poseidon/terraform-provider-ct/
content = data.template_file.bastion_user_data.rendered
strict = true
pretty_print = false
}
Error:
Error: error converting to Ignition: error at line 61, column 17
invalid unit content: unexpected newline encountered while parsing option name
on ../../modules/example/launch-template.tf line 91, in data "ct_config" "ignition":
91: data "ct_config" "ignition" {
Unit File Content:
- name: cw-agent.service
  enabled: true
  contents: |
    [Unit]
    Description=Cloudwatch Agent Service
    Documentation=https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html
    Requires=docker.socket
    After=docker.socket

    [Service]
    TimeoutStartSec=0
    Restart=always
    RestartSec=10s
    Environment=CONFIG=${cw-agent-config}
    ExecStartPre=-/usr/bin/docker kill cloudwatch-agent
    ExecStartPre=-/usr/bin/docker rm cloudwatch-agent
    ExecStartPre=/usr/bin/docker pull amazon/cloudwatch-agent
    ExecStart=/usr/bin/docker run --name cloudwatch-agent \
      --net host \
      --env CW_CONFIG_CONTENT=$CONFIG \
      amazon/cloudwatch-agent
    ExecStop=/usr/bin/docker stop cloudwatch-agent

    [Install]
    WantedBy=multi-user.target
Rendering:
data "template_file" "cw_agent_config" {
template = file("${path.module}/../cw-agent-config.json")
}
cw-agent-config = indent(10, data.template_file.cw_agent_config.rendered)
File Content:
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "cwagent"
  },
  "metrics": {
    "append_dimensions": {
      "AutoScalingGroupName": "$${aws:AutoScalingGroupName}",
      "ImageId": "$${aws:ImageId}",
      "InstanceId": "$${aws:InstanceId}",
      "InstanceType": "$${aws:InstanceType}"
    },
    "metrics_collected": {
      "disk": {
        "drop_device": true,
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 120,
        "resources": [
          "/"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 120
      }
    }
  }
}
I need this JSON file to be available as the value of an environment variable named CW_CONFIG_CONTENT inside a Docker container.
This was solved by using the Terraform jsonencode function, which emits the rendered configuration as a single escaped line, so no raw newlines end up inside the Environment= entry of the unit.
cw-agent-config = jsonencode(data.template_file.cw_agent_config.rendered)
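(For anyone wondering why this helps: the rendered JSON spans many lines, and each raw newline lands inside the Environment= option of the unit, which is exactly what the Ignition parser rejects. A rough stand-in for the same idea in Python, using json.dumps as an analogue of jsonencode; the local file path is hypothetical:)

import json

# Hypothetical local copy of the rendered config file.
with open("cw-agent-config.json") as f:
    raw = f.read()

print(raw.count("\n"))       # many raw newlines -> these break the unit parser
encoded = json.dumps(raw)    # one line, newlines escaped as \n, like jsonencode
print(encoded.count("\n"))   # 0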
Here is my code; at the bottom is the error.
This is my config file:
[MQTT]
userMQTT = /
passwdMQTT = /
hostMQTT = broker.hivemq.com
poerMQTT = 1883
Above is my config file, where I used a public broker. And now the rest of the code:
import configparser
from time import localtime, strftime
import json
import paho.mqtt.client as mqtt

config = configparser.ConfigParser()
config.read('/home/pi/bin/py.conf')  # Broker connection config.

requestTopic = 'services/timeservice/request/+'  # Request comes in here. Note wildcard.
responseTopic = 'services/timeservice/response/'  # Response goes here. Request ID will be appended later


def onConnect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))


def onMessage(client, userdata, message):
    requestTopic = message.topic
    requestID = requestTopic.split('/')[3]  # obtain requestID as last field from the topic
    print("Received a time request on topic " + requestTopic + ".")
    lTime = strftime('%H:%M:%S', localtime())
    client.publish((responseTopic + requestID), payload=lTime, qos=0, retain=False)


def onDisconnect(client, userdata, message):
    print("Disconnected from the broker.")


# Create MQTT client instance
mqttc = mqtt.Client(client_id='raspberrypi', clean_session=True)
mqttc.on_connect = onConnect
mqttc.on_message = onMessage
mqttc.on_disconnect = onDisconnect

# Connect to the broker
mqttc.username_pw_set(config['MQTT']['userMQTT'], password=config['MQTT']['passwdMQTT'])
But after I type:
mqttc.connect(config['MQTT']['hostMQTT'], port=int(config['MQTT']['portMQTT']), keepalive=60, bind_address="")
I get an ERROR:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/paho/mqtt/client.py", line 768, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.5/dist-packages/paho/mqtt/client.py", line 895, in reconnect
    sock = socket.create_connection((self._host, self._port), source_address=(self._bind_address, 0))
  File "/usr/lib/python3.5/socket.py", line 694, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/lib/python3.5/socket.py", line 733, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
I tried putting an address in bind_address="" but I keep getting the same error. I tried bind_address="0.0.0.0" and my local address.
Your config file contains:
poerMQTT = 1883
Whereas your code is accessing:
portMQTT
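So the two names need to match (either rename the key in the config file or read poerMQTT in the code). To make this kind of mismatch fail with a clear message instead of an obscure socket error, a small guard before connecting helps; a sketch using the same configparser setup:

import configparser

config = configparser.ConfigParser()
config.read('/home/pi/bin/py.conf')

# Fail fast if the [MQTT] section is missing an expected key
# (for example the "poerMQTT" / "portMQTT" mismatch above).
required = ("userMQTT", "passwdMQTT", "hostMQTT", "portMQTT")
missing = [key for key in required if key not in config["MQTT"]]
if missing:
    raise KeyError("Missing keys in [MQTT] section: {}".format(missing))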