BACKGROUND:
We are trying to deploy an application as a Docker container to an edge device (running Greengrass core as a container in a Linux environment) through the AWS Greengrass Docker application deployment connector.
We configure the Greengrass group connector in the AWS cloud for the Docker app deployment.
ISSUES:
When deploying from the AWS Greengrass group (AWS cloud), we see a successful-deployment message, but the application is not actually deployed to the edge device (running Greengrass core as a container).
LOGS:
DockerApplicationDeploymentLog:
[2020-11-05T10:35:42.632Z][FATAL]-lambda_runtime.py:381,Failed to initialize Lambda runtime due to exception: "getgrnam(): name not found: 'docker'"
[2020-11-05T10:35:44.789Z][WARN]-ipc_client.py:162,deprecated arg port=8000 will be ignored
[2020-11-05T10:35:45.012Z][WARN]-ipc_client.py:162,deprecated arg port=8000 will be ignored
[2020-11-05T10:35:45.012Z][INFO]-docker_deployer.py:41,docker deployer starting up
[2020-11-05T10:35:45.012Z][INFO]-docker_deployer.py:45,checking inputs
[2020-11-05T10:35:45.012Z][INFO]-docker_deployer.py:52,docker group permissions
[2020-11-05T10:35:45.02Z][FATAL]-lambda_runtime.py:141,Failed to import handler function "handlers.function_handler" due to exception: "getgrnam(): name not found: 'docker'"
RuntimeSystemLog:
[2020-11-05T10:31:49.78Z][DEBUG]-Restart worker because it was killed. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6"}
[2020-11-05T10:31:49.78Z][DEBUG]-Reserve worker. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6"}
[2020-11-05T10:31:49.78Z][DEBUG]-Doing start attempt: {"Attempt count": 0, "workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6"}
[2020-11-05T10:31:49.78Z][DEBUG]-Creating directory. {"dir": "/greengrass/ggc/packages/1.11.0/var/lambda/8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:49.78Z][DEBUG]-changed ownership {"path": "/greengrass/ggc/packages/1.11.0/var/lambda/8b0ee21d-e481-4d27-5e30-cb4d912547f5", "new uid": 121, "new gid": 121}
[2020-11-05T10:31:49.782Z][DEBUG]-Resolving environment variable {"Variable": "PYTHONPATH=/greengrass/ggc/deployment/lambda/arn.aws.lambda.ap-south-1.aws.function.DockerApplicationDeployment.6"}
[2020-11-05T10:31:49.79Z][DEBUG]-Resolving environment variable {"Variable": "PATH=/usr/bin:/usr/local/bin"}
[2020-11-05T10:31:49.799Z][DEBUG]-Resolving environment variable {"Variable": "DOCKER_DEPLOYER_DOCKER_COMPOSE_DESTINATION_FILE_PATH=/home/ggc_user"}
[2020-11-05T10:31:49.82Z][DEBUG]-Creating new worker. {"functionArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6", "workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:49.82Z][DEBUG]-Starting worker process. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:49.829Z][DEBUG]-Worker process started. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "pid": 20471}
[2020-11-05T10:31:49.83Z][DEBUG]-Start work result: {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6", "state": "Starting", "initDurationSeconds": 0.012234454}
[2020-11-05T10:31:49.831Z][INFO]-Created worker. {"functionArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6", "workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "pid": 20471}
[2020-11-05T10:31:53.155Z][DEBUG]-Received a credential provider request {"serverLambdaArn": "arn:aws:lambda:::function:GGTES", "clientId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:53.156Z][DEBUG]-WorkManager getting work {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "funcArn": "arn:aws:lambda:::function:GGTES", "invocationId": "955c2c43-1187-4001-7988-4213b95eb584"}
[2020-11-05T10:31:53.156Z][DEBUG]-Successfully GET work. {"invocationId": "955c2c43-1187-4001-7988-4213b95eb584", "fromWorkerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.156Z][DEBUG]-POST work result. {"invocationId": "955c2c43-1187-4001-7988-4213b95eb584", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.156Z][DEBUG]-WorkManager putting work result. {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "invocationId": "955c2c43-1187-4001-7988-4213b95eb584"}
[2020-11-05T10:31:53.156Z][DEBUG]-WorkManager put work result successfully. {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "invocationId": "955c2c43-1187-4001-7988-4213b95eb584"}
[2020-11-05T10:31:53.156Z][DEBUG]-Successfully POST work result. {"invocationId": "955c2c43-1187-4001-7988-4213b95eb584", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.157Z][DEBUG]-Handled a credential provider request {"clientId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:53.158Z][DEBUG]-GET work item. {"fromWorkerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.158Z][DEBUG]-Worker timer doesn't exist. {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df"}
Did you double-check that you meet the requirements listed in
https://docs.aws.amazon.com/greengrass/latest/developerguide/docker-app-connector.html
https://docs.aws.amazon.com/greengrass/latest/developerguide/docker-app-connector.html#docker-app-connector-linux-user
I don't know this particular error, but it complains about a missing basic user/group setting:
[2020-11-05T10:35:42.632Z][FATAL]-lambda_runtime.py:381,Failed to initialize Lambda runtime due to exception: "getgrnam(): name not found: 'docker'"
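getgrnam() looks up a Linux group by name, so the connector fails because no group called 'docker' exists in the environment where the Greengrass core (and its Lambdas) run. A minimal sketch of a fix, assuming the default ggc_user from the linked docs (adjust names to your setup):

# Run these inside the environment where Greengrass core executes.
# Create the 'docker' group that getgrnam() fails to find:
sudo groupadd docker
# Give the default Greengrass Lambda user access to the Docker daemon:
sudo usermod -aG docker ggc_user
# Verify that the group now resolves (this is what getgrnam() checks):
getent group docker
# Restart the Greengrass daemon so the change takes effect:
sudo /greengrass/ggc/core/greengrassd restart

Note that since your Greengrass core itself runs in a container, the group (and a reachable Docker daemon socket) must exist inside that container, not only on the host.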
I'm getting this error while trying to run Cube.js with the default command from the getting-started docs. I started it in a folder and I'm running it in Docker.
Warning. There is no cube.js file. Continue with environment variables
Cube Store (0.28.31) is assigned to 3030 port.
Warning. Option apiSecret is required in dev mode. Cube.js has generated it as e3b8c5a35fe378f4d481ada777e5f3c4
Authentication checks are disabled in developer mode. Please use NODE_ENV=production to enable it.
Dev environment available at http://localhost:4000
Cube.js server (0.28.31) is listening on 4000
2021-09-03 15:06:01,512 INFO [cubestore::http::status] <pid:17> Serving status probes at 0.0.0.0:3031
2021-09-03 15:06:01,515 INFO [cubestore::metastore] <pid:17> Using existing metastore in /cube/conf/.cubestore/data/metastore
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { message: "IO error: While fsync: a directory: Invalid argument" }', /project/cubestore/src/metastore/mod.rs:1542:40
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Cube Store Start Error: undefined
I guess it's a corrupted metastore, caused by Cube Store being shut down uncleanly on your machine. Could you please try to drop the .cubestore directory?
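For example, stopping the container and deleting the metastore directory shown in the log above (the in-container path /cube/conf/.cubestore comes from your log; if you followed the getting-started guide, /cube/conf is usually bind-mounted from your project folder):

# Stop the Cube.js container, drop the Cube Store metastore, restart.
# The metastore is recreated on the next start; schema and config files
# are not stored there, but any Cube Store data is reinitialized.
docker stop <cubejs-container>
rm -rf ./.cubestore   # in the project folder mounted at /cube/conf
docker start <cubejs-container>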
I am facing problems deploying a smart contract on my private blockchain network. I created the network on three VMs (miners) using puppeth on a fourth VM (controller), following the steps in this blog: https://medium.com/@collin.cusce/using-puppeth-to-manually-create-an-ethereum-proof-of-authority-clique-network-on-aws-ae0d7c906cce
Afterwards, I installed Truffle on one of the miner VMs and initialized it with the command:
truffle init
Then I wrote a simple hello-world smart contract, compiled it, and deployed it on the Truffle development blockchain, and it worked. However, when I try to deploy it on my private blockchain, I can't connect to the network.
The admin.nodeInfo command in the geth console returns the following output:
docker exec -it 954cd3955065 geth attach ipc:/root/.ethereum/geth.ipc
Welcome to the Geth JavaScript console!
instance: Geth/v1.9.25-unstable-ead81461-20201123/linux-amd64/go1.15.5
coinbase: 0xe8cc4bea2cfdfd14cddefe1141bedd109576b9a9
at block: 78558 (Tue Dec 01 2020 22:01:02 GMT+0000 (UTC))
datadir: /root/.ethereum
modules: admin:1.0 clique:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
To exit, press ctrl-d
> admin.nodeInfo
{
enode: "enode://7206ca3c62f6db47e1230dcf14a765d4c9b4870a66470dbb21fcc5ed2fab2167d6bcc47eec8044c42037b3e6e0017aeb8ddfc3580471da54a6c7274a0c1fe46b@10.100.2.32:30303",
enr: "enr:-Je4QGXlVAESp8r2s1uHBJxoDLWQo8IvZsbe5sX2YRBb0un9Gdlt8nfDKQBR_j0lDPtaoCCuis4cJJlqtEHfa4tLO2EIg2V0aMfGhG5b-B6AgmlkgnY0gmlwhApkAiCJc2VjcDI1NmsxoQNyBso8YvbbR-EjDc8Up2XUybSHCmZHDbsh_MXtL6shZ4N0Y3CCdl-DdWRwgnZf",
id: "027a351994ac1b127df56180b6210310cc0164f17f1b12c167cb167c4ffaa122",
ip: "10.100.2.32",
listenAddr: "[::]:30303",
name: "Geth/v1.9.25-unstable-ead81461-20201123/linux-amd64/go1.15.5",
ports: {
discovery: 30303,
listener: 30303
},
protocols: {
eth: {
config: {
byzantiumBlock: 0,
chainId: 1515,
clique: {...},
constantinopleBlock: 0,
eip150Block: 0,
eip150Hash: "0x0000000000000000000000000000000000000000000000000000000000000000",
eip155Block: 0,
eip158Block: 0,
homesteadBlock: 0,
istanbulBlock: 0,
petersburgBlock: 0
},
difficulty: 98201,
genesis: "0x17f752387c901db617cf0594ecd2cb9811dfcd666318c2e0e7cb0239471da979",
head: "0xf8a37d0390558746901faa55463c127c553f02cf2d23ce0cb469fcd470c810f9",
network: 1515
}
}
}
I tried adding the network configuration in truffle-config.js like this:
devnet2: {
  host: "localhost",
  port: 30303, // port where the node is
  network_id: "*", // match any network id
  from: "0x91cd7b879fefff34259d577a56d290b3315bf9b3" // account to deploy from (must be a quoted string)
}
Then, when deploying with the command truffle deploy --network devnet2, I always get this error:
Compiling your contracts...
===========================
> Everything is up to date, there is nothing to compile.
/usr/local/lib/node_modules/truffle/build/webpack:/packages/provider/index.js:56
throw new Error(errorMessage);
^
Error: There was a timeout while attempting to connect to the network.
Check to see that your provider is valid.
If you have a slow internet connection, try configuring a longer timeout in your Truffle config. Use the networks[networkName].networkCheckTimeout property to do this.
at Timeout.setTimeout (/usr/local/lib/node_modules/truffle/build/webpack:/packages/provider/index.js:56:1)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)
I tried extending the timeout limit, but it didn't work. I also tried using Web3 providers (HttpProvider and IpcProvider), but without any luck (I can give more details if needed).
Any help is appreciated, because I have spent a lot of time on this without getting anywhere. Unfortunately, I couldn't find anything on deploying smart contracts to a node that is running in Docker. If needed, I can gladly give more details about what I did.
I managed to run smart contracts on a private network, though not using Docker. Some things come to mind: did you run a miner on your network? You need a miner running so that the contract actually gets migrated. Did you make sure the gas limit is met when running the contract? The miners wait for the maximum gas limit to be reached before processing any request.
Did you already deploy the contract? In the migration scripts you either create a new migration script by bumping the version, or use the reset flag to run all migration scripts again; see the sketch below.
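A sketch of the two options (devnet2 is the network name from the question):

# Option 1: re-run every migration script from scratch on the private net
truffle migrate --reset --network devnet2
# Option 2: add a new numbered migration script, e.g.
# migrations/2_deploy_helloworld.js, and run only the pending migrations
truffle migrate --network devnet2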
I am currently using Jenkins 2.138.1 and Oracle WebLogic 12c.
I deploy a WAR file to the WebLogic server using the Jenkins WebLogic Deployer Plugin.
The Jenkins build is triggered every night at 3 am.
Sometimes the build succeeds, but sometimes it fails with the error below:
Deployment Plan: null
App root: null
App config: null
Deployment Options: {
isRetireGracefully=true,
isGracefulProductionToAdmin=false,
isGracefulIgnoreSessions=false,
rmiGracePeriod=-1,
retireTimeoutSecs=-1,
undeployAllVersions=false,
archiveVersion=null,
planVersion=null,
isLibrary=false,
libSpecVersion=null,
libImplVersion=null,
stageMode=null,
clusterTimeout=3600000,
altDD=null,
altWlsDD=null,
name=myapp,
securityModel=null,
securityValidationEnabled=false,
versionIdentifier=null,
isTestMode=false,
forceUndeployTimeout=0,
defaultSubmoduleTargets=true,
timeout=0,
deploymentPrincipalName=null,
useExpiredLock=false
}
[BasicOperation.execute():445] : Initiating undeploy operation for app, myApp, on targets:
[BasicOperation.execute():447] : MyMgdSvr1
weblogic.management.provider.EditWaitTimedOutException: Waited 0 milliseconds
at weblogic.utils.StackTraceDisabled.unknownMethod()
I think I have found the solution: the failure happens when the deployment cannot obtain the edit lock from the WebLogic admin server.
Therefore, before undeploying and deploying the application, I overwrite the user lock so that my instance can obtain it.
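For illustration, releasing a stuck edit session can be scripted with WLST; a rough sketch (not necessarily what the plugin does internally; host, port, and credentials are placeholders):

# Forcibly cancel a stale edit session so the next deployment
# can acquire the edit lock. Requires WLST on the classpath.
java weblogic.WLST <<'EOF'
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
cancelEdit('y')   # discard any unsaved changes held by the stale session
disconnect()
EOF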
While trying to build an AWX image (Ansible works) for ppc64le, the following comes up:
TASK [image_build : Build AWX distribution using container] ***************************************************************************************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Error creating container: 400 Client Error: Bad Request (\"invalid reference format\")"}
to retry, use: --limit @/root/awx/installer/install.retry
PLAY RECAP ****************************************************************************************************************************************************************************************************
localhost : ok=10 changed=3 unreachable=0 failed=1
How can I see what really happens in the background? Are there any verbose Docker logs I can look at? The message itself is somewhat useless to me. I already set Ansible to verbose, but that was of no help either.
Docker image names may contain only lowercase letters (a-z), digits, and separators; uppercase characters are invalid.
Either you are giving an unsupported image name, or a variable (or path) passed to the build (or the container) cannot be resolved.
To enable debug logs, add "--debug" to the docker daemon invocation (in /etc/systemd/system/multi-user.target.wants/docker.service on a systemd-based Linux environment).
For reference: https://docs.docker.com/config/daemon/#configure-the-docker-daemon
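On a systemd-based host, the same can also be done through the daemon config file instead of editing the unit file; a sketch (note this overwrites an existing daemon.json, so merge by hand if you already have one):

# Turn on daemon debug logging and restart Docker
echo '{ "debug": true }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
# Follow the daemon's debug output while re-running the failing task
sudo journalctl -u docker.service -f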
I'm trying to deploy the car auction sample .bna file to the HLF v0.6 service on Bluemix and I'm getting different errors.
My connection profile for Bluemix:
{
"type": "hlf",
"membershipServicesURL": "grpcs://1c0b2dabbb834804ae3d284fed9059f4-ca.us.blockchain.ibm.com:30002",
"peerURL": "grpcs://1c0b2dabbb834804ae3d284fed9059f4-vp0.us.blockchain.ibm.com:30002",
"eventHubURL": "grpcs://1c0b2dabbb834804ae3d284fed9059f4-vp0.us.blockchain.ibm.com:31002",
"keyValStore": "/Users/me/.composer-credentials",
"deployWaitTime": "3000",
"invokeWaitTime": "1000",
"certificate": "-----BEGIN CERTIFICATE-----\nMIID6TCCA26gAwIBAgIQCiYEWw1faoRpM2xufaiPLTAKBggqhkjOPQQDAjBMMQsw\nCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMSYwJAYDVQQDEx1EaWdp\nQ2VydCBFQ0MgU2VjdXJlIFNlcnZlciBDQTAeFw0xNjA2MDcwMDAwMDBaFw0xOTA2\nMTIxMjAwMDBaMIGJMQswCQYDVQQGEwJVUzERMA8GA1UECBMITmV3IFlvcmsxDzAN\nBgNVBAcTBkFybW9uazE0MDIGA1UEChMrSW50ZXJuYXRpb25hbCBCdXNpbmVzcyBN\nYWNoaW5lcyBDb3Jwb3JhdGlvbjEgMB4GA1UEAwwXKi51cy5ibG9ja2NoYWluLmli\nbS5jb20wWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARTKAZypDOqw34HWujQeL82\nj1e9rN1inpN6ngrq49+OpYIe8ckHnJhsWPpf+zeIQePboDQVUTDtYXh7212BsVoX\no4IB8jCCAe4wHwYDVR0jBBgwFoAUo53mH/naOU/AbuiRy5Wl2jHiCp8wHQYDVR0O\nBBYEFK+1RoBnUnb8nr2hNtkUu3FRrbYuMDkGA1UdEQQyMDCCFyoudXMuYmxvY2tj\naGFpbi5pYm0uY29tghV1cy5ibG9ja2NoYWluLmlibS5jb20wDgYDVR0PAQH/BAQD\nAgeAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjBpBgNVHR8EYjBgMC6g\nLKAqhihodHRwOi8vY3JsMy5kaWdpY2VydC5jb20vc3NjYS1lY2MtZzEuY3JsMC6g\nLKAqhihodHRwOi8vY3JsNC5kaWdpY2VydC5jb20vc3NjYS1lY2MtZzEuY3JsMEwG\nA1UdIARFMEMwNwYJYIZIAYb9bAEBMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3\nLmRpZ2ljZXJ0LmNvbS9DUFMwCAYGZ4EMAQICMHsGCCsGAQUFBwEBBG8wbTAkBggr\nBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQuY29tMEUGCCsGAQUFBzAChjlo\ndHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGlnaUNlcnRFQ0NTZWN1cmVTZXJ2\nZXJDQS5jcnQwDAYDVR0TAQH/BAIwADAKBggqhkjOPQQDAgNpADBmAjEA7LViaN74\nOwIp/zqfwSRvURg965+m73/edCeNKrsLf6GuE0sLwpX6pQNnDlr6SzGnAjEA+qk0\nsYRnd2gCQeD9fWbCJIw0vJDqeZr1WJ64aVoJ8kyASzY/yoarSm2wqujXJwEf\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDrDCCApSgAwIBAgIQCssoukZe5TkIdnRw883GEjANBgkqhkiG9w0BAQwFADBh\nMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\nd3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD\nQTAeFw0xMzAzMDgxMjAwMDBaFw0yMzAzMDgxMjAwMDBaMEwxCzAJBgNVBAYTAlVT\nMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxJjAkBgNVBAMTHURpZ2lDZXJ0IEVDQyBT\nZWN1cmUgU2VydmVyIENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAE4ghC6nfYJN6g\nLGSkE85AnCNyqQIKDjc/ITa4jVMU9tWRlUvzlgKNcR7E2Munn17voOZ/WpIRllNv\n68DLP679Wz9HJOeaBy6Wvqgvu1cYr3GkvXg6HuhbPGtkESvMNCuMo4IBITCCAR0w\nEgYDVR0TAQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAYYwNAYIKwYBBQUHAQEE\nKDAmMCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wQgYDVR0f\nBDswOTA3oDWgM4YxaHR0cDovL2NybDMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0R2xv\nYmFsUm9vdENBLmNybDA9BgNVHSAENjA0MDIGBFUdIAAwKjAoBggrBgEFBQcCARYc\naHR0cHM6Ly93d3cuZGlnaWNlcnQuY29tL0NQUzAdBgNVHQ4EFgQUo53mH/naOU/A\nbuiRy5Wl2jHiCp8wHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUwDQYJ\nKoZIhvcNAQEMBQADggEBAMeKoENL7HTJxavVHzA1Nm6YVntIrAVjrnuaVyRXzG/6\n3qttnMe2uuzO58pzZNvfBDcKAEmzP58mrZGMIOgfiA4q+2Y3yDDo0sIkp0VILeoB\nUEoxlBPfjV/aKrtJPGHzecicZpIalir0ezZYoyxBEHQa0+1IttK7igZFcTMQMHp6\nmCHdJLnsnLWSB62DxsRq+HfmNb4TDydkskO/g+l3VtsIh5RHFPVfKK+jaEyDj2D3\nloB5hWp2Jp2VDCADjT7ueihlZGak2YPqmXTNbk19HOuNssWvFhtOyPNV6og4ETQd\nEa8/B6hPatJ0ES8q/HO3X8IVQwVs1n3aAr0im0/T+Xc=\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh\nMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\nd3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD\nQTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT\nMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j\nb20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG\n9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB\nCSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97\nnh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt\n43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P\nT19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4\ngdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO\nBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR\nTLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw\nDQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr\nhMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg\n06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF\nPnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls\nYSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk\nCAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=\n-----END CERTIFICATE-----\n",
"certificatePath": "/certs/peer/cert.pem"
}
I'm executing the following command:
composer network deploy -p bluemix -a sample-networks/packages/carauction-network/carauction-network@0.0.7.bna -i admin -s PASSS
I tried this many times and I'm getting one of the following errors:
I. Security handshake:
$ composer network deploy -p bluemix -a sample-networks/packages/carauction-network/carauction-network@0.0.7.bna -i admin -s 97b116b3c4
Deploying business network from archive: carauction-network/carauction-network@0.0.7.bna
Business network definition:
Identifier: carauction-network@0.0.7
Description: Car Auction Business Network
E0528 10:59:18.962200000 123145570217984 handshake.c:128] Security handshake failed: {"created":"@1495940358.962177000","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"@1495940358.962172000","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_poll_posix.c","file_line":427}]}
Error
Command failed
II. Unhandled 'error' event:
$ composer network deploy -p bluemix -a sample-networks/packages/carauction-network/carauction-network@0.0.7.bna -i admin -s 97b116b3c4
Deploying business network from archive: carauction-network/carauction-network@0.0.7.bna
Business network definition:
Identifier: carauction-network@0.0.7
Description: Car Auction Business Network
events.js:160
throw er; // Unhandled 'error' event
^
Error: unknown service protos.Events
at ClientDuplexStream._emitStatusIfDone (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._receiveStatus (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:169:8)
at /usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:634:14
III. Identity or token does not match:
$ composer network deploy -p bluemix -a sample-networks/packages/carauction-network/carauction-network@0.0.7.bna -i admin -s 97b116b3c4
Deploying business network from archive: carauction-network/carauction-network@0.0.7.bna
Business network definition:
Identifier: carauction-network@0.0.7
Description: Car Auction Business Network
Error: Identity or token does not match.
Command failed
I feel "SSL Handshake problem" (I) and "Unhandled 'error' event" (II) are related to the old issue with HFC not handling properly GRPC disconnects Is it correct?. What I can't figure out is what's causing "Identity or token does not match" (III). My current guess is that admin user does not have a wallet created yet (can't see it in my ~/.composer-credentials folder). Is composer deploy supposed to create wallet automatically if it does not yet exists?
OK, I did some more experiments, and here is what I've learned:
It was a problem in my profile's connection.json. When I copied and modified the one from the answer to this question: Fabric composer integration with Bluemix blockchain service, it started working.
I was setting long timeouts in connection.json, but the CLI command still ends with the following error:
events.js:160
throw er; // Unhandled 'error' event
^
Error: {"created":"#1496109180.720017000","description":"Secure read failed","file":"../src/core/lib/security/transport/secure_endpoint.c","file_line":157,"grpc_status":14,"referenced_errors":[{"created":"#1496109180.720007000","description":"OS Error","errno":54,"file":"../src/core/lib/iomgr/tcp_posix.c","file_line":229,"os_error":"Connection reset by peer","syscall":"recvmsg"}]}
at ClientDuplexStream._emitStatusIfDone (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._receiveStatus (/usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:169:8)
at /usr/local/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:634:14
At the same time, the chaincode does get deployed. I'm still not sure what is causing this.
Since composer's deployment command finishes with an error, the mapping between composer's network ID and the deployed chaincode ID isn't added to the connection profile. That means it needs to be added manually, by putting something like this in the respective connection.json:
"networks": {
"carauction-network": "8f637b9886357fb3e24864cfa36f9cdae84e587028a08074d856e9b6635afa76"
}