How to configure a Corda network using Docker (with a YAML file)

I'm struggling to configure my Corda network (which is very similar to https://github.com/corda/cordapp-example) using Docker. I'm just missing examples of a real network like this: loading CorDapps is easy (all the examples I've found so far just load a sample like this one: https://github.com/corda/corda-docker/tree/master/plugins), but committing a transaction is something totally different (it shouldn't be hard, I agree!).
To commit a transaction, consensus among the parties and notary approval are needed, and neither is possible if the nodes in the network cannot communicate with each other.
I'm getting these logs from the Xxxxx node when I try to commit a transaction to the ledger (as you can see, the connection is mapped to localhost:10010 instead of the expected xxxxxx:10010):
[WARN ] 2018-09-20T20:47:03,246Z [main] utilities.AppendOnlyPersistentMapBase.set - Double insert in net.corda.node.utilities.AppendOnlyPersistentMap for entity class class net.corda.node.services.identity.PersistentIdentityService$PersistentIdentity key E66540FF121D732F4417B293203D1C61F9F5A467A19AC21EE0327665BA0CA561, not inserting the second time {}
[INFO ] 2018-09-20T20:47:03,257Z [main] messaging.P2PMessagingClient.updateBridgesOnNetworkChange - Updating bridges on network map change: NodeInfo(addresses=[xxxxx:10002], legalIdentitiesAndCerts=[O=Xxxxx, L=New York, C=US], platformVersion=3, serial=1537476408266) {}
[INFO ] 2018-09-20T20:47:03,465Z [main] BasicInfo.printBasicNodeInfo - Loaded CorDapps : example-cordapp-0.1, corda-finance-3.2-corda, corda-core-3.2-corda {}
[INFO ] 2018-09-20T20:47:03,481Z [main] BasicInfo.printBasicNodeInfo - Node for "Xxxxx" started up and registered in 55.5 sec {}
[INFO ] 2018-09-20T20:47:03,486Z [main] messaging.RPCServer.start - Starting RPC server with configuration RPCServerConfiguration(rpcThreadPoolSize=4, reapInterval=PT1S, deduplicationCacheExpiry=PT24H) {}
[INFO ] 2018-09-20T20:47:04,278Z [Thread-0 (ActiveMQ-client-global-threads)] messaging.RPCServer.clientArtemisMessageHandler - SUBMITTING {actor_id=user1, actor_owningIdentity=O=Xxxxx, L=New York, C=US, actor_store_id=NODE_CONFIG, invocation_id=df81b170-c57d-4d2c-ac5e-c50b2dbc951d, invocation_timestamp=2018-09-20T20:47:04.249Z, session_id=84058489-40f2-4b91-9527-8e0cfe188294, session_timestamp=2018-09-20T20:46:55.554Z}
[INFO ] 2018-09-20T20:47:07,901Z [Thread-0 (ActiveMQ-client-global-threads)] messaging.RPCServer.clientArtemisMessageHandler - SUBMITTING {actor_id=user1, actor_owningIdentity=O=Xxxxx, L=New York, C=US, actor_store_id=NODE_CONFIG, invocation_id=abaaafe5-91a9-450f-9b04-078c4446697d, invocation_timestamp=2018-09-20T20:47:07.901Z, session_id=84058489-40f2-4b91-9527-8e0cfe188294, session_timestamp=2018-09-20T20:46:55.554Z}
[INFO ] 2018-09-20T20:47:08,572Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[localhost:10010], legalIdentitiesAndCerts=[O=Xxxxxx, L=New York, C=US], platformVersion=3, serial=1537381493186) {}
[INFO ] 2018-09-20T20:47:08,660Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - No previous node found {}
[INFO ] 2018-09-20T20:47:08,938Z [RxIoScheduler-2] messaging.P2PMessagingClient.updateBridgesOnNetworkChange - Updating bridges on network map change: NodeInfo(addresses=[localhost:10010], legalIdentitiesAndCerts=[O=Xxxxxx, L=New York, C=US], platformVersion=3, serial=1537381493186) {}
[INFO ] 2018-09-20T20:47:09,053Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Done adding node with info: NodeInfo(addresses=[localhost:10010], legalIdentitiesAndCerts=[O=Xxxxxx, L=New York, C=US], platformVersion=3, serial=1537381493186) {}
[INFO ] 2018-09-20T20:47:09,053Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[localhost:10013], legalIdentitiesAndCerts=[O=Xxxxxxx, L=New York, C=US], platformVersion=3, serial=1537381494853) {}
[INFO ] 2018-09-20T20:47:09,056Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - No previous node found {}
[INFO ] 2018-09-20T20:47:09,138Z [RxIoScheduler-2] messaging.P2PMessagingClient.updateBridgesOnNetworkChange - Updating bridges on network map change: NodeInfo(addresses=[localhost:10013], legalIdentitiesAndCerts=[O=Xxxxxxx, L=New York, C=US], platformVersion=3, serial=1537381494853) {}
[INFO ] 2018-09-20T20:47:09,161Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Done adding node with info: NodeInfo(addresses=[localhost:10013], legalIdentitiesAndCerts=[O=Xxxxxxx, L=New York, C=US], platformVersion=3, serial=1537381494853) {}
[INFO ] 2018-09-20T20:47:09,161Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[localhost:10007], legalIdentitiesAndCerts=[O=Xxxxx, L=New York, C=US], platformVersion=3, serial=1537381494948) {}
[INFO ] 2018-09-20T20:47:09,237Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Discarding older nodeInfo for O=Xxxxx, L=New York, C=US {}
[INFO ] 2018-09-20T20:47:09,237Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[localhost:10006], legalIdentitiesAndCerts=[O=Notary, L=New York, C=US], platformVersion=3, serial=1537381495396) {}
[INFO ] 2018-09-20T20:47:09,263Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - No previous node found {}
[INFO ] 2018-09-20T20:47:09,347Z [RxIoScheduler-2] messaging.P2PMessagingClient.updateBridgesOnNetworkChange - Updating bridges on network map change: NodeInfo(addresses=[localhost:10006], legalIdentitiesAndCerts=[O=Notary, L=New York, C=US], platformVersion=3, serial=1537381495396) {}
[INFO ] 2018-09-20T20:47:09,358Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Done adding node with info: NodeInfo(addresses=[localhost:10006], legalIdentitiesAndCerts=[O=Notary, L=New York, C=US], platformVersion=3, serial=1537381495396) {}
[INFO ] 2018-09-20T20:55:56,465Z [Thread-1 (ActiveMQ-client-global-threads)] messaging.RPCServer.clientArtemisMessageHandler - SUBMITTING {actor_id=user1, actor_owningIdentity=O=Xxxxx, L=New York, C=US, actor_store_id=NODE_CONFIG, invocation_id=bb90b749-06e8-4f8d-9ef1-b841d0e7be8e, invocation_timestamp=2018-09-20T20:55:56.465Z, session_id=84058489-40f2-4b91-9527-8e0cfe188294, session_timestamp=2018-09-20T20:46:55.554Z}
[INFO ] 2018-09-20T20:55:56,529Z [Thread-1 (ActiveMQ-client-global-threads)] messaging.RPCServer.clientArtemisMessageHandler - SUBMITTING {actor_id=user1, actor_owningIdentity=O=Xxxxx, L=New York, C=US, actor_store_id=NODE_CONFIG, invocation_id=3b9a0d1b-309f-4a08-bd6d-2e332ac7e069, invocation_timestamp=2018-09-20T20:55:56.529Z, session_id=84058489-40f2-4b91-9527-8e0cfe188294, session_timestamp=2018-09-20T20:46:55.554Z}
[INFO ] 2018-09-20T20:55:57,251Z [Node thread-1] flow.[8429c030-3a58-4c1a-985c-64eca7f4c54e].initiateSession - Initiating flow session with party O=Xxxxxx, L=New York, C=US. Session id for tracing purposes is SessionId(toLong=8801945676362020052). {}
[INFO ] 2018-09-20T20:55:57,371Z [Messaging DLGWNKZHEid91BXSUY1sSxtGkcoJRjwy3NCrXHXWzsxcNU] messaging.P2PMessagingClient.createQueueIfAbsent - Create fresh queue internal.peers.DL6ZbP6hVmkL3w2rysrMYHchy7axULJssDPkjUzxvn9DB6 bound on same address {}
[INFO ] 2018-09-20T20:55:57,439Z [Thread-1 (ActiveMQ-client-global-threads)] bridging.BridgeControlListener.processControlMessage - Received bridge control message Create(nodeIdentity=DLGWNKZHEid91BXSUY1sSxtGkcoJRjwy3NCrXHXWzsxcNU, bridgeInfo=BridgeEntry(queueName=internal.peers.DL6ZbP6hVmkL3w2rysrMYHchy7axULJssDPkjUzxvn9DB6, targets=[localhost:10010], legalNames=[O=Xxxxxx, L=New York, C=US])) {actor_id=user1, actor_owningIdentity=O=Xxxxx, L=New York, C=US, actor_store_id=NODE_CONFIG, invocation_id=3b9a0d1b-309f-4a08-bd6d-2e332ac7e069, invocation_timestamp=2018-09-20T20:55:56.529Z, session_id=84058489-40f2-4b91-9527-8e0cfe188294, session_timestamp=2018-09-20T20:46:55.554Z}
[INFO ] 2018-09-20T20:55:57,462Z [Thread-1 (ActiveMQ-client-global-threads)] peers.DL6ZbP6hVmkL3w2rysrMYHchy7axULJssDPkjUzxvn9DB6 -> localhost:10010:O=Xxxxxx, L=New York, C=US.start - Create new AMQP bridge {actor_id=user1, actor_owningIdentity=O=Xxxxx, L=New York, C=US, actor_store_id=NODE_CONFIG, invocation_id=3b9a0d1b-309f-4a08-bd6d-2e332ac7e069, invocation_timestamp=2018-09-20T20:55:56.529Z, session_id=84058489-40f2-4b91-9527-8e0cfe188294, session_timestamp=2018-09-20T20:46:55.554Z}
[INFO ] 2018-09-20T20:55:57,474Z [Thread-1 (ActiveMQ-client-global-threads)] netty.AMQPClient.start - connect to: localhost:10010 {actor_id=user1, actor_owningIdentity=O=Xxxxx, L=New York, C=US, actor_store_id=NODE_CONFIG, invocation_id=3b9a0d1b-309f-4a08-bd6d-2e332ac7e069, invocation_timestamp=2018-09-20T20:55:56.529Z, session_id=84058489-40f2-4b91-9527-8e0cfe188294, session_timestamp=2018-09-20T20:46:55.554Z}
[INFO ] 2018-09-20T20:55:57,569Z [nioEventLoopGroup-2-1] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:55:58,571Z [nioEventLoopGroup-2-2] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:55:58,574Z [nioEventLoopGroup-2-3] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:55:59,576Z [nioEventLoopGroup-2-4] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:55:59,580Z [nioEventLoopGroup-2-5] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:00,582Z [nioEventLoopGroup-2-6] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:00,589Z [nioEventLoopGroup-2-7] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:01,591Z [nioEventLoopGroup-2-8] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:01,593Z [nioEventLoopGroup-2-1] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:02,595Z [nioEventLoopGroup-2-2] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:02,599Z [nioEventLoopGroup-2-3] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:03,600Z [nioEventLoopGroup-2-4] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:03,603Z [nioEventLoopGroup-2-5] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:04,604Z [nioEventLoopGroup-2-6] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:04,606Z [nioEventLoopGroup-2-7] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:05,607Z [nioEventLoopGroup-2-8] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:05,610Z [nioEventLoopGroup-2-1] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:06,612Z [nioEventLoopGroup-2-2] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:06,614Z [nioEventLoopGroup-2-3] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:07,616Z [nioEventLoopGroup-2-4] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:07,618Z [nioEventLoopGroup-2-5] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:08,620Z [nioEventLoopGroup-2-6] netty.AMQPClient.run - Retry connect to localhost:10010 {}
[INFO ] 2018-09-20T20:56:08,623Z [nioEventLoopGroup-2-7] netty.AMQPClient.operationComplete - Failed to connect to localhost:10010 {}
This is the node.conf file for one of my nodes (all the others follow the same format):
myLegalName="O=Xxxxx,L=New York,C=US"
basedir : "/opt/corda"
p2pAddress : "xxxxx:10002"
webAddress : "xxxxx:10004"
rpcSettings {
useSsl=false
address="xxxxx:10003"
adminAddress="xxxxx:10051"
}
keyStorePassword : "cordacadevpass"
trustStorePassword : "trustpass"
h2port : 11000
useHTTPS : false
devMode : true
rpcUsers=[
{
password=test
permissions=[
ALL
]
user=user1
}
]
This is the notary node.conf file:
basedir : "/opt/corda"
p2pAddress : "notary:10002"
webAddress : "notary:10004"
h2port : 11000
myLegalName="O=Notary,L=New York,C=US"
detectPublicIp=false
keyStorePassword : "cordacadevpass"
trustStorePassword : "trustpass"
extraAdvertisedServiceIds: [ "corda.notary.simple" ]
useHTTPS : false
devMode : true
rpcSettings = {
useSsl=false
address="notary:10003"
adminAddress="notary:10052"
}
notary {
validating=false
}
rpcUsers=[
{
password=test
permissions=[
ALL
]
user=user1
}
]
This is the docker-compose.yml file:
version: '3.3'
services:
  Notary:
    networks:
      - corda
    build:
      context: .
      args:
        BUILDTIME_CORDA_VERSION: 3.2-corda
    env_file:
      - ./corda_docker.env
    ports:
      - "10002:10002"
    image: corda/notary:2.0
    container_name: notary
    volumes:
      - ./java-source/build/nodes/Notary/node.conf:/opt/corda/node.conf
      - ./java-source/build/nodes/Notary/network-parameters:/opt/corda/network-parameters
      - ./java-source/build/nodes/Notary/additional-node-infos:/opt/corda/additional-node-infos
      - ./java-source/build/nodes/Notary/certificates/:/opt/corda/certificates/
      - ./java-source/build/nodes/Notary/cordapps/:/opt/corda/cordapps/
  Xxxxx:
    networks:
      - corda
    build:
      context: .
      args:
        BUILDTIME_CORDA_VERSION: 3.2-corda
    env_file:
      - ./corda_docker.env
    ports:
      - "10007:10002"
      - "10008:10003"
      - "10009:10004"
      - "10048:10048"
    image: corda/xxxxx:2.0
    container_name: xxxxx
    volumes:
      - ./java-source/build/nodes/Xxxxx/node.conf:/opt/corda/node.conf
      - ./java-source/build/nodes/Xxxxx/network-parameters:/opt/corda/network-parameters
      - ./java-source/build/nodes/Xxxxx/additional-node-infos:/opt/corda/additional-node-infos
      - ./java-source/build/nodes/Xxxxx/certificates/:/opt/corda/certificates/
      - ./java-source/build/nodes/Xxxxx/cordapps/:/opt/corda/cordapps/
  Xxxxxx:
    networks:
      - corda
    build:
      context: .
      args:
        BUILDTIME_CORDA_VERSION: 3.2-corda
    env_file:
      - ./corda_docker.env
    ports:
      - "10010:10002"
      - "10011:10003"
      - "10051:10051"
      - "8888:10004"
    image: corda/xxxxxx:2.0
    container_name: xxxxxx
    volumes:
      - ./java-source/build/nodes/Xxxxxx/node.conf:/opt/corda/node.conf
      - ./java-source/build/nodes/Xxxxxx/network-parameters:/opt/corda/network-parameters
      - ./java-source/build/nodes/Xxxxxx/additional-node-infos:/opt/corda/additional-node-infos
      - ./java-source/build/nodes/Xxxxxx/certificates/:/opt/corda/certificates/
      - ./java-source/build/nodes/Xxxxxx/cordapps/:/opt/corda/cordapps/
  Xxxxxxx:
    networks:
      - corda
    build:
      context: .
      args:
        BUILDTIME_CORDA_VERSION: 3.2-corda
    env_file:
      - ./corda_docker.env
    ports:
      - "10013:10002"
      - "10014:10003"
      - "10015:10004"
      - "10054:10054"
    image: corda/xxxxxxx:2.0
    container_name: xxxxxxx
    volumes:
      - ./java-source/build/nodes/Xxxxxxx/node.conf:/opt/corda/node.conf
      - ./java-source/build/nodes/Xxxxxxx/network-parameters:/opt/corda/network-parameters
      - ./java-source/build/nodes/Xxxxxxx/additional-node-infos:/opt/corda/additional-node-infos
      - ./java-source/build/nodes/Xxxxxxx/certificates/:/opt/corda/certificates/
      - ./java-source/build/nodes/Xxxxxxx/cordapps/:/opt/corda/cordapps/
networks:
  corda:
What changes should I make so that the nodes are mapped correctly? As you can see, the node is calling itself instead of the other peer.
The transaction involves two peers.

It looks like you created your nodes with the net.corda.plugins.Cordform plugin.
For nodes that will run in Docker you should use net.corda.plugins.Dockerform instead.
An example of preparing Docker nodes is sketched below.
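As a rough sketch only (the task name follows the Corda samples; the CorDapp coordinate and the X.500 names are placeholders to replace with your own), the Dockerform task in build.gradle looks something like this:

// Hedged sketch of a Dockerform task (Corda 3.x Gradle DSL); names,
// notary settings and CorDapp coordinates are illustrative.
task prepareDockerNodes(type: net.corda.plugins.Dockerform, dependsOn: ['jar']) {
    node {
        name "O=Notary,L=New York,C=US"
        notary = [validating: false]
        cordapps = ["$project.group:cordapp-example:0.1"]
    }
    node {
        name "O=Xxxxx,L=New York,C=US"
        cordapps = ["$project.group:cordapp-example:0.1"]
        rpcUsers = [[user: "user1", password: "test", permissions: ["ALL"]]]
    }
    node {
        name "O=Xxxxxx,L=New York,C=US"
        cordapps = ["$project.group:cordapp-example:0.1"]
        rpcUsers = [[user: "user1", password: "test", permissions: ["ALL"]]]
    }
}

Running ./gradlew prepareDockerNodes should then generate the node directories together with a docker-compose.yml whose node-info files advertise the compose service names rather than localhost, which is exactly what the log above is missing.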

Related

Selenoid does not start all browsers with docker-compose

About my Docker setup:
I have Windows 10 + WSL2 + Docker for Windows, installed Selenoid in Ubuntu, launched it and downloaded the images (Chrome 90, 91, etc.).
The aerokube/selenoid and aerokube/selenoid-ui containers start successfully, and the tests run against them from IDEA pass without problems.
I want to run the tests against 2 versions of Chrome via docker-compose.
Config browsers.json:
{
  "chrome": {
    "default": "90.0",
    "versions": {
      "90.0": {
        "env": ["LANG=ru_RU.UTF-8", "LANGUAGE=ru:en", "LC_ALL=ru_RU.UTF-8", "TZ=Europe/Moscow"],
        "image": "selenoid/chrome:90.0",
        "tmpfs": {"/tmp": "size=512m"},
        "hosts": ["x01.aidata.io:127.0.0.1"],
        "port": "4444"
      },
      "91.0": {
        "env": ["LANG=ru_RU.UTF-8", "LANGUAGE=ru:en", "LC_ALL=ru_RU.UTF-8", "TZ=Europe/Moscow"],
        "image": "selenoid/chrome:91.0",
        "tmpfs": {"/tmp": "size=512m"},
        "hosts": ["x01.aidata.io:127.0.0.1"],
        "port": "4444"
      }
    }
  }
}
Config docker-compose.yaml
version: '3.4'
services:
  selenoid:
    image: aerokube/selenoid:latest-release
    volumes:
      - "${PWD}/init/selenoid:/etc/selenoid"
      - "${PWD}/work/selenoid/video:/opt/selenoid/video"
      - "${PWD}/work/selenoid/logs:/opt/selenoid/logs"
      - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=work/selenoid/video
    command: ["-conf", "etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video", "-log-output-dir", "/opt/selenoid/logs"]
    ports:
      - "4444:4444"
    network_mode: bridge
in IDEA:
@BeforeEach
public void initDriver() throws IOException {
    final String url = "http://localhost:4444/wd/hub";
    WebDriver driver = new RemoteWebDriver(new URL(url), DesiredCapabilities.chrome());
    driver.manage().window().setSize(new Dimension(1920, 1024));
    WebDriverRunner.setWebDriver(driver);
}

@AfterEach
public void stopDriver() {
    Optional.ofNullable(WebDriverRunner.getWebDriver()).ifPresent(WebDriver::quit);
}
It starts only version 90.0 (the first one in browsers.json), which passes successfully and closes, ignoring everything else. What needs to be corrected?
With Docker everything is OK; we figured it out: the Gradle configuration for Selenide needs to be edited so that the tests request each Chrome version.
Closing the topic.
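For reference, the Chrome version that Selenoid launches is driven by the capabilities sent from the test, not by docker-compose, so the snippet above always gets the default "90.0" entry. A minimal sketch (same imports as the snippet above, Selenium 3-style DesiredCapabilities; the chrome.version system property is a hypothetical name you could pass from Gradle with -Dchrome.version=91.0):

@BeforeEach
public void initDriver() throws IOException {
    // Hypothetical property to choose the browsers.json entry; defaults to 90.0.
    final String chromeVersion = System.getProperty("chrome.version", "90.0");
    DesiredCapabilities capabilities = DesiredCapabilities.chrome();
    // Must match one of the version keys in browsers.json ("90.0" or "91.0").
    capabilities.setVersion(chromeVersion);
    WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capabilities);
    driver.manage().window().setSize(new Dimension(1920, 1024));
    WebDriverRunner.setWebDriver(driver);
}

Running the suite twice, once with -Dchrome.version=90.0 and once with -Dchrome.version=91.0 (for example from two Gradle test tasks), would exercise both images.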

Cannot provision an actuator in IoT Agent Fiware

I am using the following python code to create a service group
import json
import requests

url = 'http://localhost:4041/iot/services'
headers = {'Content-Type': "application/json", 'fiware-service': "openiot", 'fiware-servicepath': "/mtp"}
data = {
    "services": [
        {
            "apikey": "456dgffdg56465dfg",
            "cbroker": "http://orion:1026",
            "entity_type": "Door",
            # resource attribute is left blank since HTTP communication is not being used
            "resource": ""
        }
    ]
}
res = requests.post(url, json=data, headers=headers)
# print(res.status_code)
if res.status_code == 201:
    print("Created")
elif res.status_code == 409:
    print("A resource cannot be created because it already exists")
else:
    print(res.raise_for_status())
But when trying to provision an actuator, I get a 400 Bad Request error with the code below:
import json
import requests

url = 'http://localhost:4041/iot/devices'
headers = {'Content-Type': "application/json", 'fiware-service': "openiot", 'fiware-servicepath': "/mtp"}
data = {
    "devices": [
        {
            "device_id": "door003",
            "entity_name": "urn:ngsi-ld:Door:door003",
            "entity_type": "Door",
            "protocol": "PDI-IoTA-UltraLight",
            "transport": "MQTT",
            "commands": [
                {"name": "unlock", "type": "command"},
                {"name": "open", "type": "command"},
                {"name": "close", "type": "command"},
                {"name": "lock", "type": "command"}
            ],
            "attributes": [
                {"object_id": "s", "name": "state", "type": "Text"}
            ]
        }
    ]
}
res = requests.post(url, json=data, headers=headers)
# print(res.status_code)
if res.status_code == 201:
    print("Created")
elif res.status_code == 409:
    print("Entity cannot be created because it already exists")
else:
    print(res.raise_for_status())
Here is the error message I get in console.
iot-agent | time=2021-02-17T11:39:44.132Z | lvl=DEBUG | corr=16f27639-49c2-4419-a926-2433805dbdb3 | trans=16f27639-49c2-4419-a926-2433805dbdb3 | op=IoTAgentNGSI.GenericMiddlewares | from=n/a | srv=smartdoor | subsrv=/mtp | msg=Error [BAD_REQUEST] handling request: Request error connecting to the Context Broker: 501 | comp=IoTAgent
iot-agent | time=2021-02-17T11:39:44.133Z | lvl=DEBUG | corr=390f5530-f537-4efa-980a-890a44153811 | trans=390f5530-f537-4efa-980a-890a44153811 | op=IoTAgentNGSI.DomainControl | from=n/a | srv=smartdoor | subsrv=/mtp | msg=response-time: 29 | comp=IoTAgent
What is strange is that if I remove the commands from the payload, the device provisioning works fine. Is there something I am doing wrong while trying to provision an actuator (not a sensor)?
IoT Agent version:
{"libVersion":"2.14.0-next","port":"4041","baseRoot":"/","version":"1.15.0-next"}
Orion version:
{
    "orion" : {
        "version" : "2.2.0",
        "uptime" : "0 d, 0 h, 59 m, 18 s",
        "git_hash" : "5a46a70de9e0b809cce1a1b7295027eea0aa757f",
        "compile_time" : "Thu Feb 21 10:28:42 UTC 2019",
        "compiled_by" : "root",
        "compiled_in" : "442fc4d225cf",
        "release_date" : "Thu Feb 21 10:28:42 UTC 2019",
        "doc" : "https://fiware-orion.rtfd.io/en/2.2.0/"
    }
}
My docker-compose file looks as follows:
iot-agent:
  image: fiware/iotagent-ul:latest
  hostname: iot-agent
  container_name: iot-agent
  restart: unless-stopped
  depends_on:
    - mongo-db
  networks:
    - default
  expose:
    - "4041"
  ports:
    - "4041:4041"
  environment:
    - IOTA_CB_HOST=orion
    - IOTA_CB_PORT=1026
    - IOTA_NORTH_PORT=4041
    - IOTA_REGISTRY_TYPE=mongodb
    - IOTA_LOG_LEVEL=DEBUG
    - IOTA_TIMESTAMP=true
    - IOTA_CB_NGSI_VERSION=v2
    - IOTA_AUTOCAST=true
    - IOTA_MONGO_HOST=mongo-db
    - IOTA_MONGO_PORT=27017
    - IOTA_MONGO_DB=iotagentul
    - IOTA_PROVIDER_URL=http://iot-agent:4041
    - IOTA_MQTT_HOST=mosquitto
    - IOTA_MQTT_PORT=1883
Thanks in advance.
Regards,

Serialization Error using Corda Docker Image

I get the following error (for each node) when I run docker-compose up. I configured the network parameters and the nodes myself, without using the network bootstrapper.
[ERROR] 08:07:48+0000 [main] internal.NodeStartupLogging.invoke - Exception during node startup: Serialization scheme ([6D696E696D756D], P2P) not supported. [errorCode=1e6peth, moreInformationAt=https://errors.corda.net/OS/4.0/1e6peth]
I have tried to change the properties in the network-parameters file, yet unsuccessfully so far.
Here are my config files:
myLegalName : "O=Notary, L=London, C=GB"
p2pAddress : "localhost:10008"
devMode : true
notary : {
validating : false
}
rpcSettings = {
address : "notary:10003"
adminAddress : "notary:10004"
}
rpcUsers=[
{
user="user"
password="test"
permissions=[
ALL
]
}
]
detectPublicIp : false
myLegalName : "O=PartyA, L=London, C=GB"
p2pAddress : "localhost:10005"
devMode : true
rpcSettings = {
address : "partya:10003"
adminAddress : "partya:10004"
}
rpcUsers=[
{
user=corda
password=corda_initial_password
permissions=[
ALL
]
}
]
detectPublicIp : false
myLegalName : "O=PartyB, L=London, C=GB"
p2pAddress : "localhost:10006"
devMode : true
rpcSettings = {
address : "partyb:10003"
adminAddress : "partyb:10004"
}
rpcUsers=[
{
user=corda
password=corda_initial_password
permissions=[
ALL
]
}
]
detectPublicIp : false
as well as my network-parameters file and my docker-compose.yml file:
minimumPlatformVersion=4
notaries=[NotaryInfo(identity=O=Notary, L=London, C=GB, validating=false)]
maxMessageSize=10485760
maxTransactionSize=524288000
whitelistedContractImplementations {
}
eventHorizon="30 days"
epoch=1
version: '3.7'
services:
  Notary:
    image: corda/corda-zulu-4.0:latest
    container_name: Notary
    networks:
      - corda
    volumes:
      - ./nodes/notary_node.conf:/etc/corda/node.conf
      - ./nodes/network-parameters:/opt/corda/network-parameters
  PartyA:
    image: corda/corda-zulu-4.0:latest
    container_name: PartyA
    networks:
      - corda
    volumes:
      - ./nodes/partya_node.conf:/etc/corda/node.conf
      - ./nodes/network-parameters:/opt/corda/network-parameters
      - ./build/libs/:/opt/corda/cordapps
  PartyB:
    image: corda/corda-zulu-4.0:latest
    container_name: PartyB
    networks:
      - corda
    volumes:
      - ./nodes/partyb_node.conf:/etc/corda/node.conf
      - ./nodes/network-parameters:/opt/corda/network-parameters
      - ./build/libs/:/opt/corda/cordapps
networks:
  corda:
Many thanks in advance for your help!
It looks like it is indeed an issue with a missing serialization scheme: a hand-written network-parameters file is plain text, not the serialized blob the node expects, so the node cannot read it.
Also, with our recent Corda 4.4 release we have published an official image of the containerized Corda node.
Feel free to check out our most recent guide on running a Corda node with Docker: https://medium.com/corda/containerising-corda-with-corda-docker-image-and-docker-compose-af32d3e8746c
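For completeness, a hedged sketch of the usual fix for the serialization error itself (the bootstrapper jar name and version are illustrative; use the one matching your Corda version): let the network bootstrapper generate the network-parameters file and the node directories from the node.conf files instead of writing network-parameters by hand.

# Put notary_node.conf, partya_node.conf and partyb_node.conf in ./nodes, then:
java -jar corda-tools-network-bootstrapper-4.0.jar --dir ./nodes
# This writes a properly serialised network-parameters file, node-info files
# and dev certificates into each node directory, which can then be mounted
# into the containers as in the compose file above.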

How to resolve service names in docker swarm mode for Hyperledger Composer?

I am using Docker swarm mode for a Hyperledger Composer setup, and I am new to Docker. My Fabric network is running okay. When I use service names in the connection.json file, installing the network fails with "REQUEST_TIMEOUT", but when I use the host's IP address instead of the service name everything works fine. So how can I resolve the service name/container name?
Here is my peer configuration:
peer1:
  deploy:
    replicas: 1
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
  hostname: peer1.eprocure.org.com
  image: hyperledger/fabric-peer:$ARCH-1.1.0
  networks:
    hyperledger-ov:
      aliases:
        - peer1.eprocure.org.com
  environment:
    - CORE_LOGGING_LEVEL=debug
    - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_PEER_ID=peer1.eprocure.org.com
    - CORE_PEER_ADDRESS=peer1.eprocure.org.com:7051
    - CORE_PEER_LOCALMSPID=eProcureMSP
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1:5984
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyperledger-ov
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.eprocure.org.com:7051
    - CORE_PEER_ENDORSER_ENABLED=true
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
    - CORE_PEER_PROFILE_ENABLED=true
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  volumes:
    - /var/run/:/host/var/run/
    - /export/composer/genesis-folder:/etc/hyperledger/configtx
    - /export/composer/crypto-config/peerOrganizations/eprocure.org.com/peers/peer1.eprocure.org.com/msp:/etc/hyperledger/peer/msp
    - /export/composer/crypto-config/peerOrganizations/eprocure.org.com/users:/etc/hyperledger/msp/users
  ports:
    - 8051:7051
    - 8053:7053
And here is my current connection.json with IP addresses:
"peers": {
"peer0.eprocure.org.com": {
"url": "grpc://192.168.0.147:7051",
"eventUrl": "grpc://192.168.0.147:7053"
},
"peer1.eprocure.org.com": {
"url": "grpc://192.168.0.147:8051",
"eventUrl": "grpc://192.168.0.147:8053"
},
"peer2.eprocure.org.com": {
"url": "grpc://192.168.0.147:9051",
"eventUrl": "grpc://192.168.0.147:9053"
}
},
I have tried the following before:
"peers": {
"peer0.eprocure.org.com": {
"url": "grpc://peers_peer0:7051",
"eventUrl": "grpc://peers_peer0:7053"
},
"peer1.eprocure.org.com": {
"url": "grpc://peers_peer1:8051",
"eventUrl": "grpc://peers_peer2:8053"
},
"peer2.eprocure.org.com": {
"url": "grpc://peers_peer2:9051",
"eventUrl": "grpc://peers_peer2:9053"
}
}
But this doesn't work.
Can anyone please let me know how I can solve my problem?
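For what it's worth, a sketch of the direction that usually works on a swarm overlay network (assuming the Composer client itself runs as a service attached to the same hyperledger-ov network): address each peer by the network alias defined in the compose file and by the container's own ports (7051/7053), not the host-published ports (8051/8053), since the published ports only apply when connecting from outside the swarm.

"peers": {
    "peer1.eprocure.org.com": {
        "url": "grpc://peer1.eprocure.org.com:7051",
        "eventUrl": "grpc://peer1.eprocure.org.com:7053"
    }
}

If the client runs outside the swarm, the service names are not resolvable there, and the host IP with the published ports (as in the working configuration above) is the expected form.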

Taking data from twitter and load it to hdfs using Flume

I'm getting an error when running the following command in Hadoop:
bin/flume-ng agent -c /usr/local/hadoop/flume/conf -f usr/local/hadoop/flume/conf/flume-twitter.conf -n TwitterAgent -Dflume.root.logger=INFO,console
It shows the below error when executing the Flume command:
2016-09-28 17:38:23,508 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: HDFS, type: hdfs
2016-09-28 17:38:23,546 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:114)] Channel MemChannel connected to [Twitter, HDFS]
2016-09-28 17:38:23,565 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{Twitter=EventDrivenSourceRunner: { source:org.apache.flume.source.twitter.TwitterSource{name:Twitter,state:IDLE} }} sinkRunners:{HDFS=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#9238ca counterGroup:{ name:null counters:{} } }} channels:{MemChannel=org.apache.flume.channel.MemoryChannel{name: MemChannel}} }
2016-09-28 17:38:23,594 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel MemChannel
2016-09-28 17:38:23,653 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: CHANNEL, name: MemChannel: Successfully registered new MBean.
2016-09-28 17:38:23,654 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: CHANNEL, name: MemChannel started
2016-09-28 17:38:23,654 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink HDFS
2016-09-28 17:38:23,654 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source Twitter
2016-09-28 17:38:23,655 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.twitter.TwitterSource.start(TwitterSource.java:131)] Starting twitter source org.apache.flume.source.twitter.TwitterSource{name:Twitter,state:IDLE} ...
2016-09-28 17:38:23,659 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: HDFS: Successfully registered new MBean.
2016-09-28 17:38:23,659 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: HDFS started
2016-09-28 17:38:23,660 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.twitter.TwitterSource.start(TwitterSource.java:139)] Twitter source Twitter started.
2016-09-28 17:38:23,660 (Twitter Stream consumer-1[initializing]) [INFO - twitter4j.internal.logging.SLF4JLogger.info(SLF4JLogger.java:83)] Establishing connection.
2016-09-28 17:38:25,544 (Twitter Stream consumer-1[Establishing connection]) [INFO - twitter4j.internal.logging.SLF4JLogger.info(SLF4JLogger.java:83)] 404:The URI requested is invalid or the resource requested, such as a user, does not exist.
Unknown URL. See Twitter Streaming API documentation at http://dev.twitter.com/pages/streaming_api
2016-09-28 17:38:25,545 (Twitter Stream consumer-1[Establishing connection]) [INFO - twitter4j.internal.logging.SLF4JLogger.info(SLF4JLogger.java:83)] Waiting for 10000 milliseconds
2016-09-28 17:38:35,547 (Twitter Stream consumer-1[Waiting for 10000 milliseconds]) [ERROR - org.apache.flume.source.twitter.TwitterSource.onException(TwitterSource.java:331)] Exception while streaming tweets
404:The URI requested is invalid or the resource requested, such as a user, does not exist.
Unknown URL. See Twitter Streaming API documentation at http://dev.twitter.com/pages/streaming_api
Relevant discussions can be found on the Internet at:
http://www.google.co.jp/search?q=ec814753 or
http://www.google.co.jp/search?q=0a74cca1
TwitterException{exceptionCode=[ec814753-0a74cca1], statusCode=404, retryAfter=-1, rateLimitStatus=null, featureSpecificRateLimitStatus=null, version=2.2.6}
at twitter4j.internal.http.HttpClientImpl.request(HttpClientImpl.java:185)
at twitter4j.internal.http.HttpClientWrapper.request(HttpClientWrapper.java:65)
at twitter4j.internal.http.HttpClientWrapper.get(HttpClientWrapper.java:93)
at twitter4j.TwitterStreamImpl.getSampleStream(TwitterStreamImpl.java:160)
at twitter4j.TwitterStreamImpl$4.getStream(TwitterStreamImpl.java:149)
at twitter4j.TwitterStreamImpl$4.getStream(TwitterStreamImpl.java:147)
at twitter4j.TwitterStreamImpl$TwitterStreamConsumer.run(TwitterStreamImpl.java:426)
2016-09-28 17:38:35,571 (Twitter Stream consumer-1[Waiting for 10000 milliseconds]) [INFO - twitter4j.internal.logging.SLF4JLogger.info(SLF4JLogger.java:83)] Establishing connection.
2016-09-28 17:38:37,049 (Twitter Stream consumer-1[Establishing connection]) [INFO - twitter4j.internal.logging.SLF4JLogger.info(SLF4JLogger.java:83)] 404:The URI requested is invalid or the resource requested, such as a user, does not exist.
Unknown URL. See Twitter Streaming API documentation at http://dev.twitter.com/pages/streaming_api
I have added the configuration file; it looks like the following:
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = xxxxx
TwitterAgent.sources.Twitter.consumerSecret = xxxx
TwitterAgent.sources.Twitter.accessToken = xxxx
TwitterAgent.sources.Twitter.accessTokenSecret = xxxx
TwitterAgent.sources.Twitter.keywords = INDIA VS NEWZELAND, apache spark, spark, flume, apache mahout, kafka
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:54310/Flume_twitter_data/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100
If anyone knows, please help me.
Thank you.
