Hyperledger Composer - NetworkAdmin#admin has no READ access to network - docker

I am following this tutorial: https://medium.freecodecamp.org/how-to-build-a-blockchain-network-using-hyperledger-fabric-and-composer-e06644ff801d
When I use the command:
composer network start --networkName my-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file my-network-admin.card
I successfully create the card and import it using the following command:
composer card import --file my-network-admin.card
However, the problem is using the following command:
composer network ping --card admin@my-network
I receive the following error:
transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#my-network@0.0.1'
Command failed
I've looked at the documentation and tried restarting the entire process a couple of times to no avail. I even tried adding the following to my permissions.acl file:
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
rule Default {
    description: "Grant all access by default"
    participant: "org.hyperledger.composer.system.Participant"
    operation: ALL
    resource: "**"
    action: ALLOW
}
rule NetworkAdminUser {
    description: "Grant business network administrators full access to user resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "**"
    action: ALLOW
}
rule NetworkAdminSystem {
    description: "Grant business network administrators full access to system resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}
EDIT:
When I run composer card list -c admin@my-network, I get the following:
userName: admin
description:
businessNetworkName: my-network
identityId: fc63d3e4b3b3d73a2be2943a0c422e7af862207f9489fc1ce3707e8769efc99b
roles:
- PeerAdmin
connectionProfile:
  name: hlfv1
  x-type: hlfv1
credentials: Credentials set
Command succeeded
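Incidentally, the roles: - PeerAdmin entry above looks suspicious for a business network card; it suggests the imported card may be carrying the peer admin's credentials rather than the network admin's. One thing worth trying (a hedged sketch, not a guaranteed fix) is deleting the imported card and importing the generated .card file again before pinging:
composer card delete -c admin@my-network
composer card import --file my-network-admin.card
composer network ping -c admin@my-network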

Related

Apache Flume agent not starting but not showing error

I'm attempting to run an Apache Flume agent on an AWS EC2 instance, but when I start the agent, it neither starts nor throws an obvious error.
I'm just starting with the simple example from Apache's documentation.
When I run:
ubuntu@ip-172-31-41-5:~/Flume$ ./bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=DEBUG,console
The console output is the following:
Info: Sourcing environment configuration script /home/ubuntu/Flume/conf/flume-env.sh
Info: Including HBASE libraries found via (/home/ubuntu/hbase-2.4.4/bin/hbase) for HBASE access
Info: Including Hive libraries found via () for Hive access
+ exec /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Xmx20m -Dflume.root.logger=DEBUG,console -Dflume.root.logger=DEBUG,console -cp '/home/ubuntu/Flume/conf:/home/ubuntu/Flume/lib/*:/home/ubuntu/hbase-2.4.4/conf:/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar:/home/ubuntu/hbase-2.4.4:/home/ubuntu/hbase-2.4.4/lib/shaded-clients/hbase-shaded-client-2.4.4.jar:/home/ubuntu/hbase-2.4.4/lib/client-facing-thirdparty/audience-annotations-0.5.0.jar:/home/ubuntu/hbase-2.4.4/lib/client-facing-thirdparty/commons-logging-1.2.jar:/home/ubuntu/hbase-2.4.4/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar:/home/ubuntu/hbase-2.4.4/lib/client-facing-thirdparty/log4j-1.2.17.jar:/home/ubuntu/hbase-2.4.4/lib/client-facing-thirdparty/slf4j-api-1.7.30.jar:/home/ubuntu/hbase-2.4.4/conf:/lib/*' -Djava.library.path=:/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib org.apache.flume.node.Application --conf-file example.conf --name a1 ./bin/flume-ng agent --conf-file example.conf --name a1
The agent doesn't throw an error but never gets further than this. I have also tried some variations, including --conf-file conf/example.conf.
Flume and Java appear to be installed correctly:
Flume
ubuntu@ip-172-31-41-5:~/Flume$ ./bin/flume-ng version
Source code repository: https://git.apache.org/repos/asf/flume.git
Revision: 1a15927e594fd0d05a59d804b90a9c31ec93f5e1
Compiled by rgoers on Sun Oct 16 14:44:15 MST 2022
From source with checksum bbbca682177262aac3a89defde369a37
Java
ubuntu@ip-172-31-41-5:~/Flume$ java -version
openjdk version "11.0.17" 2022-10-18
OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)
OpenJDK 64-Bit Server VM (build 11.0.17+8-post-Ubuntu-1ubuntu222.04, mixed mode, sharing)
Example.conf
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
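For reference, once the agent is actually running, this netcat source can be exercised the way the Flume quick-start guide does it; events typed here should then show up in the logger sink output (assuming the agent started cleanly):
# from a second terminal on the same host
telnet localhost 44444
hello world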
flume-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# If this file is placed at FLUME_CONF_DIR/flume-env.sh, it will be sourced
# during Flume startup.
# Environment variables can be set here.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
# export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"
# Let Flume write raw event data and configuration information to its log files for debugging
# purposes. Enabling these flags is not recommended in production,
# as it may result in logging sensitive user information or encryption secrets.
# export JAVA_OPTS="$JAVA_OPTS -Dorg.apache.flume.log.rawdata=true -Dorg.apache.flume.log.printconfig=true "
# Note that the Flume conf directory is always included in the classpath.
#FLUME_CLASSPATH=""
The only clue that I have is in flume.log, which shows the following error. I've even copied example.conf into the main Flume directory, but it doesn't seem to make a difference.
03 Dec 2022 21:10:03,538 ERROR [main] (org.apache.flume.node.Application.main:506) - A fatal error occurred while running. Exception follows.
org.apache.flume.conf.ConfigurationException: Unable to read file /home/ubuntu/Flume/example.conf
    at org.apache.flume.node.FileConfigurationSource.<init>(FileConfigurationSource.java:52) ~[flume-ng-node-1.11.0.jar:1.11.0]
    at org.apache.flume.node.FileConfigurationSourceFactory.createConfigurationSource(FileConfigurationSourceFactory.java:40) ~[flume-ng-node-1.11.0.jar:1.11.0]
    at org.apache.flume.node.ConfigurationSourceFactory.getConfigurationSource(ConfigurationSourceFactory.java:39) ~[flume-ng-node-1.11.0.jar:1.11.0]
    at org.apache.flume.node.Application.main(Application.java:476) ~[flume-ng-node-1.11.0.jar:1.11.0]
Caused by: java.nio.file.NoSuchFileException: /home/ubuntu/Flume/example.conf
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[?:1.8.0_352]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_352]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:1.8.0_352]
    at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[?:1.8.0_352]
    at java.nio.file.Files.newByteChannel(Files.java:361) ~[?:1.8.0_352]
    at java.nio.file.Files.newByteChannel(Files.java:407) ~[?:1.8.0_352]
    at java.nio.file.Files.readAllBytes(Files.java:3152) ~[?:1.8.0_352]
    at org.apache.flume.node.FileConfigurationSource.<init>(FileConfigurationSource.java:49) ~[flume-ng-node-1.11.0.jar:1.11.0]
    ... 3 more
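The NoSuchFileException shows the relative path being resolved against /home/ubuntu/Flume, where the file apparently isn't. A hedged workaround is to pass an absolute path so there is no ambiguity about the working directory (assuming example.conf actually lives in the conf directory):
./bin/flume-ng agent --conf conf --conf-file /home/ubuntu/Flume/conf/example.conf --name a1 -Dflume.root.logger=DEBUG,console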

Send NGSIv2 data to Orion Context Broker

Let me explain the problem. I need to register a client with an Orion Context Broker. The client (OMA LWM2M) is connected to the IoT Agent, which acts as a bridge to NGSI. My problem is that when I query localhost:1026/v2/entities, the client I connected does not appear. Please take a look at my configurations of the IoT Agent and the Context Broker to see where I am going wrong. Thank you.
Orion Context Broker:
docker-compose.yml
version: "3"
services:
  orion:
    image: fiware/orion
    ports:
      - "1026:1026"
    depends_on:
      - mongo
    command: -dbhost mongo
  mongo:
    image: mongo:4.4
    command: --nojournal
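As a quick sanity check that this broker is reachable from the machine where the IoT Agent runs (a minimal probe, assuming the port mapping above):
curl localhost:1026/version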
Fiware IoT Agent
config.js
/*
* Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
*
* This file is part of fiware-iotagent-lib
*
* fiware-iotagent-lib is free software: you can redistribute it and/or
* modify it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the License,
* or (at your option) any later version.
*
* fiware-iotagent-lib is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public
* License along with fiware-iotagent-lib.
* If not, see http://www.gnu.org/licenses/.
*
* For those usages not covered by the GNU Affero General Public License
* please contact with: [contacto@tid.es]
*/
var config = {};

config.lwm2m = {
    logLevel: 'DEBUG',
    port: 5683,
    defaultType: 'Device',
    ipProtocol: 'udp4',
    serverProtocol: 'udp4',
    /**
     * When a LWM2M client has active attributes, the IoT Agent sends an observe instruction for each one, just
     * after the client registers. This may cause an error when the client takes too long to start listening, as
     * the observe requests may not reach their destination. This timeout (ms) is used to give the client the
     * opportunity to create the listener before the server sends the requests.
     */
    delayedObservationTimeout: 50,
    formats: [
        {
            name: 'application-vnd-oma-lwm2m/text',
            value: 1541
        },
        {
            name: 'application-vnd-oma-lwm2m/tlv',
            value: 1542
        },
        {
            name: 'application-vnd-oma-lwm2m/json',
            value: 1543
        },
        {
            name: 'application-vnd-oma-lwm2m/opaque',
            value: 1544
        }
    ],
    writeFormat: 'application-vnd-oma-lwm2m/text',
    types: []
};
config.ngsi = {
    logLevel: 'DEBUG',
    timestamp: true,
    contextBroker: {
        host: 'localhost',
        port: '1026',
        ngsiVersion: 'v2'
    },
    server: {
        port: 59441
    },
    deviceRegistry: {
        //type: 'memory'
        type: 'mongodb'
    },
    mongodb: {
        host: 'localhost',
        port: '27017',
        db: 'iotagentlm2m'
        //replicaSet: ''
    },
    types: {},
    service: 'smartGondor',
    subservice: '/gardens',
    providerUrl: 'http://localhost:4041',
    deviceRegistrationDuration: 'P1Y',
    defaultType: 'Thing'
};
/**
* Configuration for secured access to instances of the Context Broker secured with a PEP Proxy.
* For the authentication mechanism to work, the authentication attribute in the configuration has to be fully
* configured, and the authentication.enabled subattribute should have the value `true`.
*
* The Username and password should be considered as sensitive data and should not be stored in plaintext.
* Either encrypt the config and decrypt when initializing the instance or use environment variables secured by
* docker secrets.
*/
// config.authentication: {
//enabled: false,
/**
* Type of the Identity Manager which is used when authenticating the IoT Agent.
* Either 'oauth2' or 'keystone'
*/
//type: 'keystone',
/**
* Name of the additional header passed to retrieve the identity of the IoT Agent
*/
//header: 'Authorization',
/**
* Hostname of the Identity Manager.
*/
//host: 'localhost',
/**
* Port of the Identity Manager.
*/
//port: '5000',
/**
* URL of the Identity Manager - a combination of the above
*/
//url: 'localhost:5000',
/**
* KEYSTONE ONLY: Username for the IoT Agent
* - Note this should not be stored in plaintext.
*/
//user: 'IOTA_AUTH_USER',
/**
* KEYSTONE ONLY: Password for the IoT Agent
* - Note this should not be stored in plaintext.
*/
//password: 'IOTA_AUTH_PASSWORD',
/**
* OAUTH2 ONLY: URL path for retrieving the token
*/
//tokenPath: '/oauth2/token',
/**
* OAUTH2 ONLY: Flag to indicate whether or not the token needs to be periodically refreshed.
*/
//permanentToken: true,
/**
* OAUTH2 ONLY: ClientId for the IoT Agent
* - Note this should not be stored in plaintext.
*/
//clientId: 'IOTA_AUTH_CLIENT_ID',
/**
* OAUTH2 ONLY: ClientSecret for the IoT Agent
* - Note this should not be stored in plaintext.
*/
//clientSecret: 'IOTA_AUTH_CLIENT_SECRET'
//};
/**
* flag indicating whether the node server will be executed in multi-core option (true) or it will be a
* single-thread one (false).
*/
// config.multiCore= true;
module.exports = config;
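One detail worth checking, given the service/subservice values in config.ngsi: NGSIv2 entities are scoped per service, so querying /v2/entities without the FIWARE headers returns nothing even when registration succeeded. A hedged example query using the values above:
curl -H 'fiware-service: smartGondor' -H 'fiware-servicepath: /gardens' 'http://localhost:1026/v2/entities'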

Flink Statefun netty DisconnectedException

I am trying to run a Flink StateFun (version 3.2.0) application on my local machine using Docker, with a single task manager and a single job manager. The application is a pipeline of multiple services that communicate with each other by sending messages through Kafka to HTTP function endpoints using aiohttp with gunicorn. At the beginning of the pipeline is a service that pulls results from Amazon S3 and sends them to the rest of the pipeline, at a rate of about 8000-10000 requests per minute.
When I run it, it works at first, but looking at the Docker logs for the Flink worker (task manager) container, I repeatedly see these warnings:
2022-03-18 17:35:43,315 WARN org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Exception caught while trying to deliver a message: (attempt #0)ToFunctionRequestSummary(address=Address(analytics-transformer, dispatch, 77ce0dcb-347c-4c03-bc32-f7ebb734b930), batchSize=1, totalSizeInBytes=1323, numberOfStates=0)
org.apache.flink.statefun.flink.core.nettyclient.exceptions.DisconnectedException: Disconnected
18:25:27,594 WARN org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Exception caught while trying to deliver a message: (attempt #0)ToFunctionRequestSummary(address=Address(web, statefun, 82936819-b3d9-4a24-b4eb-81a189d6306c), batchSize=1, totalSizeInBytes=1434, numberOfStates=0)
org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
Eventually I see this warning as well:
2022-03-18 18:06:44,848 WARN org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Exception caught while trying to deliver a message: (attempt #0)ToFunctionRequestSummary(address=Address(web, statefun, f004409f-77be-433c-8ab1-ae5f9dad605c), batchSize=1, totalSizeInBytes=1172, numberOfStates=0)
java.lang.IllegalStateException: FixedChannelPool was closed
And after some time the flink master fails due to request timeout and has to restart:
org.apache.flink.statefun.flink.core.functions.StatefulFunctionInvocationException: An error occurred when attempting to invoke function FunctionType(analytics-transformer, dispatch).
at org.apache.flink.statefun.flink.core.functions.StatefulFunction.receive(StatefulFunction.java:50) ~[statefun-flink-core.jar:3.2.0]
at org.apache.flink.statefun.flink.core.functions.ReusableContext.apply(ReusableContext.java:74) ~[statefun-flink-core.jar:3.2.0]
at org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.processNextEnvelope(LocalFunctionGroup.java:60) ~[statefun-flink-core.jar:3.2.0]
at org.apache.flink.statefun.flink.core.functions.Reductions.processEnvelopes(Reductions.java:164) ~[statefun-flink-core.jar:3.2.0]
at org.apache.flink.statefun.flink.core.functions.AsyncSink.drainOnOperatorThread(AsyncSink.java:119) ~[statefun-flink-core.jar:3.2.0]
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:338) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:324) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) ~[flink-dist_2.12-1.14.3.jar:1.14.3]
at java.lang.Thread.run(Unknown Source) ~[?:?]
Caused by: java.lang.IllegalStateException: Failure forwarding a message to a remote function Address(analytics-transformer, dispatch, 77d07eb3-f499-4265-a456-b0f75d738830)
at org.apache.flink.statefun.flink.core.reqreply.RequestReplyFunction.onAsyncResult(RequestReplyFunction.java:170) ~[statefun-flink-core.jar:3.2.0]
at org.apache.flink.statefun.flink.core.reqreply.RequestReplyFunction.invoke(RequestReplyFunction.java:124) ~[statefun-flink-core.jar:3.2.0]
at org.apache.flink.statefun.flink.core.functions.StatefulFunction.receive(StatefulFunction.java:48) ~[statefun-flink-core.jar:3.2.0]
... 16 more
Caused by: org.apache.flink.statefun.flink.core.nettyclient.exceptions.RequestTimeoutException
I am guessing this is a load issue due to the number of incoming requests, where the worker is unable to handle them all. This is what I have configured for each of the HTTP function endpoints in the module.yaml:
spec:
  functions: <function>
  urlPathTemplate: <url>
  transport:
    type: io.statefun.transports.v1/async
    call: 15min
    connect: 15min
    pool_ttl: 45s
    pool_size: 1024
    payload_max_bytes: 33554432
I find that decreasing the pool size to a small value like 20 reduces the number of warnings, but then later on I see this warning a lot:
2022-03-18 15:44:52,566 WARN org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Exception caught while trying to deliver a message: (attempt #0)ToFunctionRequestSummary(address=Address(analytics-transformer, dispatch, 7facef98-b659-442e-846f-4e4d45559555), batchSize=1, totalSizeInBytes=739, numberOfStates=0)
org.apache.flink.shaded.netty4.io.netty.channel.pool.FixedChannelPool$AcquireTimeoutException: Acquire operation took longer then configured maximum time
which I'm assuming means that the connections were not able to form before the timeout due to the small pool size compared to the large number of requests.
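A middle ground between the two extremes might look like the sketch below; the numbers are illustrative assumptions (shorter timeouts so stuck requests fail fast and retry, a moderate pool), not tested values:
transport:
  type: io.statefun.transports.v1/async
  call: 2min          # assumption: fail fast instead of holding requests for 15min
  connect: 10s        # assumption: connection setup should be quick locally
  pool_ttl: 15s
  pool_size: 128      # assumption: between the 20 and 1024 tried above
  payload_max_bytes: 33554432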
Here is the flink-conf.yaml:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is the base for the Apache Flink configuration
statefun.flink-job-name: Statefun Application
#==============================================================================
# Configurations strictly required by Stateful Functions. Do not change.
#==============================================================================
classloader.parent-first-patterns.additional: org.apache.flink.statefun;org.apache.kafka;com.google.protobuf
#==============================================================================
# Fault tolerance, checkpointing and recovery.
# For more related configuration options, please see: https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#fault-tolerance
#==============================================================================
# Uncomment the below to enable checkpointing for your application
#execution.checkpointing.mode: EXACTLY_ONCE
#execution.checkpointing.interval: 5sec
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 2147483647
restart-strategy.fixed-delay.delay: 1sec
state.backend.local-recovery: true
state.backend: rocksdb
state.backend.rocksdb.timer-service.factory: ROCKSDB
state.backend.rocksdb.localdir: /local/state/rocksdb
state.backend.rocksdb.memory.partitioned-index-filters: true
state.backend.rocksdb.checkpoint.transfer.thread.num: 8
state.backend.rocksdb.thread.num: 4
state.checkpoints.dir: file:///checkpoint-dir
state.backend.incremental: true
taskmanager.state.local.root-dirs: file:///local/state/recovery
#==============================================================================
# Recommended memory configurations. Users may change according to their needs.
#==============================================================================
jobmanager.memory.process.size: 1g
taskmanager.memory.process.size: 4g
#==============================================================================
# Support easy upgrades as the module.yaml file updates
#==============================================================================
pipeline.auto-generate-uids: false
execution.savepoint.ignore-unclaimed-state: true
statefun.async.max-per-task: 163840
execution.checkpointing.mode: EXACTLY_ONCE
execution.checkpointing.interval: 5sec
I have also tried increasing the number of gunicorn workers and setting taskmanager.network.netty.server.numThreads to 100 in the flink-conf.yaml, but this does not seem to fix the issue.
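For the endpoint side, scaling out the gunicorn workers serving the aiohttp app would look roughly like this (a sketch; the module path web.app:app and the worker count are assumptions):
gunicorn web.app:app --worker-class aiohttp.GunicornWebWorker --workers 8 --bind 0.0.0.0:8000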

DB Ownership process error running phoenix test in containerized elixir 1.6.1

I have an umbrella project composed of:
gateway - the Phoenix application
core - the business model layer
notifications - a dedicated app for delivering sms, email, etc…
users - user management, role system, and authentication
These components are connected via AMQP, so they can send and receive messages among themselves.
We are using Docker and drone.io, hosted on Google Cloud's Kubernetes Engine. What is happening is that, randomly, the application raises the following exception while running my tests:
{\"status\":\"error\",\"message\":\"%DBConnection.OwnershipError{message: \\\"cannot find ownership process for #PID<0.716.0>.\\\\n\\\\nWhen using ownership, you must manage connections in one\\\\nof the four ways:\\\\n\\\\n* By explicitly checking out a connection\\\\n* By explicitly allowing a spawned process\\\\n* By running the pool in shared mode\\\\n* By using :caller option with allowed process\\\\n\\\\nThe first two options require every new process to explicitly\\\\ncheck a connection out or be allowed by calling checkout or\\\\nallow respectively.\\\\n\\\\nThe third option requires a {:shared, pid} mode to be set.\\\\nIf using shared mode in tests, make sure your tests are not\\\\nasync.\\\\n\\\\nThe fourth option requires [caller: pid] to be used when\\\\nchecking out a connection from the pool. The caller process\\\\nshould already be allowed on a connection.\\\\n\\\\nIf you are reading this error, it means you have not done one\\\\nof the steps above or that the owner process has crashed.\\\\n\\\\nSee Ecto.Adapters.SQL.Sandbox docs for more information.\\\"}\",\"code\":0}"
It comes wrapped in JSON, since we exchange AMQP messages in our tests. For instance:
# Given the module FindUserByEmail
defmodule Users.Services.FindUserByEmail do
  use GenAMQP.Server,
    event: "find_user_by_email",
    conn_name: Application.get_env(:gen_amqp, :conn_name)

  alias User.Repo
  alias User.Models.User, as: UserModel

  def execute(payload) do
    %{"email" => email} = Poison.decode!(payload)

    resp =
      case Repo.get_by(UserModel, email: email) do
        %UserModel{} = user -> %{status: :ok, response: user}
        nil -> ErrorHelper.err(2011)
        true -> ErrorHelper.err(2012)
        {:error, %Ecto.Changeset{} = changeset} -> ViewHelper.translate_errors(changeset)
      end

    {:reply, Poison.encode!(resp)}
  end
end
# and the test
defmodule Users.Services.FindUserByEmailTest do
  use User.ModelCase

  alias GenAMQP.Client

  def execute(payload) do
    resp = Client.call_with_conn(@conn_name, "find_user_by_email", payload)
    data = Poison.decode!(resp, keys: :atoms)
    assert data.status == "ok"
  end
end
The following is my .drone.yaml file:
pipeline:
  unit-tests:
    image: bitwalker/alpine-elixir-phoenix:1.6.1
    environment:
      RABBITCONN: amqp://user:pass@localhost:0000/unit_testing
      DATABASE_URL: ecto://username:password@postgres/testing
    commands:
      - mix local.hex --force && mix local.rebar --force
      - mix deps.get
      - mix compile --force
      - mix test
The mix.exs file in every app contains the following aliases:
defp aliases do
  [
    "test": ["ecto.create", "ecto.migrate", "test"]
  ]
end
All our model_case files contain this configuration:
setup tags do
  :ok = Sandbox.checkout(User.Repo)

  unless tags[:async] do
    Sandbox.mode(User.Repo, {:shared, self()})
  end

  :ok
end
How can I debug this? It only occurs while testing code inside the container. Could the issue be related to the container's resources?
Any tip or hint would be appreciated.
Try setting ownership_timeout and timeout to large numbers in your config/test.exs:
config :app_name, User.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: ...,
  password: ...,
  database: ...,
  hostname: ...,
  pool: Ecto.Adapters.SQL.Sandbox,
  timeout: 120_000, # I think the default is 5000
  pool_timeout: 120_000,
  ownership_timeout: 120_000 # I think the default is 5000
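Another angle, since this error usually means a process other than the test process is touching the repo: explicitly allow the GenAMQP server process on the test's sandbox connection. A hedged sketch, assuming the server is a registered process (the lookup name here is hypothetical):
# in a setup block, after Sandbox.checkout(User.Repo)
pid = Process.whereis(Users.Services.FindUserByEmail) # hypothetical registered name
Ecto.Adapters.SQL.Sandbox.allow(User.Repo, self(), pid)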

Can't deploy .bna network definition to Bluemix, multiple errors

I'm trying to deploy the car auction sample .bna file to the HLF v0.6 service on Bluemix, and I'm getting different errors.
My connection profile for Bluemix:
{
"type": "hlf",
"membershipServicesURL": "grpcs://1c0b2dabbb834804ae3d284fed9059f4-ca.us.blockchain.ibm.com:30002",
"peerURL": "grpcs://1c0b2dabbb834804ae3d284fed9059f4-vp0.us.blockchain.ibm.com:30002",
"eventHubURL": "grpcs://1c0b2dabbb834804ae3d284fed9059f4-vp0.us.blockchain.ibm.com:31002",
"keyValStore": "/Users/me/.composer-credentials",
"deployWaitTime": "3000",
"invokeWaitTime": "1000",
"certificate": "-----BEGIN CERTIFICATE-----\nMIID6TCCA26gAwIBAgIQCiYEWw1faoRpM2xufaiPLTAKBggqhkjOPQQDAjBMMQsw\nCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMSYwJAYDVQQDEx1EaWdp\nQ2VydCBFQ0MgU2VjdXJlIFNlcnZlciBDQTAeFw0xNjA2MDcwMDAwMDBaFw0xOTA2\nMTIxMjAwMDBaMIGJMQswCQYDVQQGEwJVUzERMA8GA1UECBMITmV3IFlvcmsxDzAN\nBgNVBAcTBkFybW9uazE0MDIGA1UEChMrSW50ZXJuYXRpb25hbCBCdXNpbmVzcyBN\nYWNoaW5lcyBDb3Jwb3JhdGlvbjEgMB4GA1UEAwwXKi51cy5ibG9ja2NoYWluLmli\nbS5jb20wWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARTKAZypDOqw34HWujQeL82\nj1e9rN1inpN6ngrq49+OpYIe8ckHnJhsWPpf+zeIQePboDQVUTDtYXh7212BsVoX\no4IB8jCCAe4wHwYDVR0jBBgwFoAUo53mH/naOU/AbuiRy5Wl2jHiCp8wHQYDVR0O\nBBYEFK+1RoBnUnb8nr2hNtkUu3FRrbYuMDkGA1UdEQQyMDCCFyoudXMuYmxvY2tj\naGFpbi5pYm0uY29tghV1cy5ibG9ja2NoYWluLmlibS5jb20wDgYDVR0PAQH/BAQD\nAgeAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjBpBgNVHR8EYjBgMC6g\nLKAqhihodHRwOi8vY3JsMy5kaWdpY2VydC5jb20vc3NjYS1lY2MtZzEuY3JsMC6g\nLKAqhihodHRwOi8vY3JsNC5kaWdpY2VydC5jb20vc3NjYS1lY2MtZzEuY3JsMEwG\nA1UdIARFMEMwNwYJYIZIAYb9bAEBMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3\nLmRpZ2ljZXJ0LmNvbS9DUFMwCAYGZ4EMAQICMHsGCCsGAQUFBwEBBG8wbTAkBggr\nBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQuY29tMEUGCCsGAQUFBzAChjlo\ndHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGlnaUNlcnRFQ0NTZWN1cmVTZXJ2\nZXJDQS5jcnQwDAYDVR0TAQH/BAIwADAKBggqhkjOPQQDAgNpADBmAjEA7LViaN74\nOwIp/zqfwSRvURg965+m73/edCeNKrsLf6GuE0sLwpX6pQNnDlr6SzGnAjEA+qk0\nsYRnd2gCQeD9fWbCJIw0vJDqeZr1WJ64aVoJ8kyASzY/yoarSm2wqujXJwEf\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDrDCCApSgAwIBAgIQCssoukZe5TkIdnRw883GEjANBgkqhkiG9w0BAQwFADBh\nMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\nd3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD\nQTAeFw0xMzAzMDgxMjAwMDBaFw0yMzAzMDgxMjAwMDBaMEwxCzAJBgNVBAYTAlVT\nMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxJjAkBgNVBAMTHURpZ2lDZXJ0IEVDQyBT\nZWN1cmUgU2VydmVyIENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAE4ghC6nfYJN6g\nLGSkE85AnCNyqQIKDjc/ITa4jVMU9tWRlUvzlgKNcR7E2Munn17voOZ/WpIRllNv\n68DLP679Wz9HJOeaBy6Wvqgvu1cYr3GkvXg6HuhbPGtkESvMNCuMo4IBITCCAR0w\nEgYDVR0TAQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAYYwNAYIKwYBBQUHAQEE\nKDAmMCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wQgYDVR0f\nBDswOTA3oDWgM4YxaHR0cDovL2NybDMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0R2xv\nYmFsUm9vdENBLmNybDA9BgNVHSAENjA0MDIGBFUdIAAwKjAoBggrBgEFBQcCARYc\naHR0cHM6Ly93d3cuZGlnaWNlcnQuY29tL0NQUzAdBgNVHQ4EFgQUo53mH/naOU/A\nbuiRy5Wl2jHiCp8wHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUwDQYJ\nKoZIhvcNAQEMBQADggEBAMeKoENL7HTJxavVHzA1Nm6YVntIrAVjrnuaVyRXzG/6\n3qttnMe2uuzO58pzZNvfBDcKAEmzP58mrZGMIOgfiA4q+2Y3yDDo0sIkp0VILeoB\nUEoxlBPfjV/aKrtJPGHzecicZpIalir0ezZYoyxBEHQa0+1IttK7igZFcTMQMHp6\nmCHdJLnsnLWSB62DxsRq+HfmNb4TDydkskO/g+l3VtsIh5RHFPVfKK+jaEyDj2D3\nloB5hWp2Jp2VDCADjT7ueihlZGak2YPqmXTNbk19HOuNssWvFhtOyPNV6og4ETQd\nEa8/B6hPatJ0ES8q/HO3X8IVQwVs1n3aAr0im0/T+Xc=\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh\nMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\nd3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD\nQTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT\nMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j\nb20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG\n9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB\nCSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97\nnh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt\n43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P\nT19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4\ngdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO\nBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR\nTLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw\nDQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr\nhMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg\n06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF\nPnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls\nYSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk\nCAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=\n-----END CERTIFICATE-----\n",
"certificatePath": "/certs/peer/cert.pem"
}
I'm executing the following command:
composer network deploy -p bluemix -a sample-networks/packages/carauction-network/carauction-network@0.0.7.bna -i admin -s PASSS
I tried this many times and I'm getting one of the following errors:
I. Security handshake:
$ composer network deploy -p bluemix -a sample-networks/packages/carauction-network/carauction-network@0.0.7.bna -i admin -s 97b116b3c4
Deploying business network from archive: carauction-network/carauction-network@0.0.7.bna
Business network definition:
Identifier: carauction-network@0.0.7
Description: Car Auction Business Network
E0528 10:59:18.962200000 123145570217984 handshake.c:128]
Security handshake failed:
{"created":"#1495940358.962177000","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1495940358.962172000","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_poll_posix.c","file_line":427}]}
Error
Command failed
II. Unhandled 'error' event:
$ composer network deploy -p bluemix -a sample-networks/packages/carauction-network/carauction-network@0.0.7.bna -i admin -s 97b116b3c4
Deploying business network from archive: carauction-network/carauction-network@0.0.7.bna
Business network definition:
Identifier: carauction-network@0.0.7
Description: Car Auction Business Network
events.js:160
throw er; // Unhandled 'error' event
^
Error: unknown service protos.Events
    at ClientDuplexStream._emitStatusIfDone (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
    at ClientDuplexStream._receiveStatus (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:169:8)
    at /usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:634:14
III. Identity or token does not match:
$ composer network deploy -p bluemix -a sample-networks/packages/carauction-network/carauction-network@0.0.7.bna -i admin -s 97b116b3c4
Deploying business network from archive: carauction-network/carauction-network@0.0.7.bna
Business network definition:
Identifier: carauction-network@0.0.7
Description: Car Auction Business Network
Error: Identity or token does not match.
Command failed
I feel "SSL Handshake problem" (I) and "Unhandled 'error' event" (II) are related to the old issue with HFC not handling properly GRPC disconnects Is it correct?. What I can't figure out is what's causing "Identity or token does not match" (III). My current guess is that admin user does not have a wallet created yet (can't see it in my ~/.composer-credentials folder). Is composer deploy supposed to create wallet automatically if it does not yet exists?
Ok, I did some more experiments, and here is what I've learned:
It was a problem in my profile's connection.json. When I copied and modified the one from the answer to this question: Fabric composer integration with Bluemix blockchain service, it started working.
I was setting long timeouts in connection.json, but the CLI command still ends with the following error:
events.js:160
throw er; // Unhandled 'error' event
^
Error: {"created":"#1496109180.720017000","description":"Secure read failed","file":"../src/core/lib/security/transport/secure_endpoint.c","file_line":157,"grpc_status":14,"referenced_errors":[{"created":"#1496109180.720007000","description":"OS Error","errno":54,"file":"../src/core/lib/iomgr/tcp_posix.c","file_line":229,"os_error":"Connection reset by peer","syscall":"recvmsg"}]}
at ClientDuplexStream._emitStatusIfDone (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._receiveStatus (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:169:8)
at /usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:634:14
At the same time, the chaincode does get deployed. I'm still not sure what is causing this.
Since composer's deployment command finishes with an error, the mapping between composer's network ID and the deployed chaincode ID isn't added to the connection profile. This means it needs to be added manually, by adding something like this to the respective connection.json:
"networks": {
"carauction-network": "8f637b9886357fb3e24864cfa36f9cdae84e587028a08074d856e9b6635afa76"
}
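Once the mapping is in place, the deployment can be checked end to end with something like the following (a sketch, using the same profile and identity as above):
composer network list -p bluemix -n carauction-network -i admin -s <secret>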
