Channel Creation Failed. gRPC timeout - Docker

I'm following the getting started steps to set up the Fabric environment on my Mac, from the steps mentioned here:
When I try to start my network using the ./network_setup.sh up script, I get the following gRPC timeout error (as shown in the attached images).
Does anyone have any idea what I'm missing?

So I got it working. The issue was that something had updated the .proto files. I had to run the following command from the fabric directory to regenerate the corresponding pb.go files.
make protos
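For context, the full sequence was roughly the one below; the fabric checkout path and the network_setup.sh down step are assumptions based on the standard getting-started layout, so adjust them to your setup:

cd $GOPATH/src/github.com/hyperledger/fabric   # fabric source checkout (assumed default $GOPATH layout)
make protos                                    # regenerate the *.pb.go files from the updated .proto definitions
cd <directory-containing-network_setup.sh>     # wherever the getting-started steps place the scripts
./network_setup.sh down                        # tear down the half-started network before retrying
./network_setup.sh up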

Related

Problem understanding how to, if at all possible, run my Docker file (.tar)

I received a .tar Docker file from a friend who told me that it should contain all dependencies for a program I've been struggling to get working, and that all I need to do is "run" the Docker file. The Docker file is in .tar format and is around 3.1 GB. The program this file was set up to run is called OpenSimRT. The GitHub link for it is as follows:
https://github.com/mitkof6/OpenSimRT
The google drive link to the Docker file is as follows:
https://drive.google.com/file/d/1M-5RnnBKGzaoSB4MCktzsceU4tWCCr3j/view?usp=sharing
This program has many dependencies; some big ones to note are that it runs on Ubuntu 18.04 and OpenSim 4.1.
I'm not a computer scientist by any means, so I've been struggling even to learn Docker basics like loading and running an image. However, I desperately need this program to work. If you have any steps or advice on how to run this .tar, I'd greatly appreciate it. Alternatively, if you are able to find a way to get OpenSimRT up and running and can post those steps, I'd be more than happy with that solution as well.
I've tried the commands "docker run" and "docker load" followed by their respective tags, file paths, args, etc. However, even when I fix various issues, I always get stuck on a missing /var/lib/docker/tmp/docker-import-....(random numbers) file. The numbers change every so often while trying to solve the issue, but eventually I always end up with some variation of this error: Error response from daemon: open /var/lib/docker/tmp/docker-import-3640220538/bin/json: no such file or directory.
PS: I have already extracted the .tar, and there is no install guide/instructions, .exe, or installer application. As a result I'm not sure how to get the program installed and running.
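For what it's worth, the usual flow for an archive produced with docker save is sketched below; the file and image names are placeholders, and the real image name is whatever docker images lists after the load. If the archive was instead produced with docker export (a flat container filesystem), docker import is the matching command rather than docker load.

docker load -i opensimrt.tar                 # load the image archive into the local image store (file name is a placeholder)
docker images                                # note the REPOSITORY and TAG that just appeared
docker run -it <repository:tag> /bin/bash    # start an interactive container from that image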

Cannot create docs for components in Backstage: Docker error

I am trying to display docs stored in a repository created by a Backstage.io component on the Backstage /docs page UI, but when I try to access the docs I get the following error:
Building a newer version of this documentation failed. Error: "Failed to generate docs from C:\\Users\\Admin\\AppData\\Local\\Temp\\backstage-enprxk into C:\\Users\\Admin\\AppData\\Local\\Temp\\techdocs-tmp-W6iVab; caused by Error: Docker container returned a non-zero exit code (1)"
Files in my repository:
The docs folder only has index.md, and mkdocs.yml has:
nav:
  - Home: index.md
I was getting similar issues working on a local POC of Backstage. The biggest problem was that I needed to install pip, Python, mkdocs, and mkdocs-techdocs-core (i.e. pip3 install mkdocs-techdocs-core). If you have done that and then followed everything in this documentation, it should start working. Hope that helps. I spent a couple of days trying to get past these types of errors.
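In case it saves someone the same search, the host dependencies for the local generator boil down to the commands below (package names as in the answer above; the sanity checks are just assumptions about a typical setup):

pip3 install mkdocs mkdocs-techdocs-core   # MkDocs plus the Backstage TechDocs plugin
mkdocs --version                           # confirm mkdocs is on PATH for the Backstage process
python3 --version                          # Python 3 and pip must already be installed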
For me, the issue above was fixed by the change below, since the Docker generator was not working inside my container in Kubernetes.
I changed app-config.yaml:
techdocs:
  builder: 'local' # Alternatives - 'external'
  generator:
    runIn: 'local' # changed from 'docker' to 'local' here

Use of --experiments=no_use_multiple_sdk_containers in Google Cloud Dataflow

Issue summary:
Hi,
I am using Avro version 1.11.0 for parsing an Avro file and decoding it. We have a custom requirement, so I am not able to use ReadFromAvro. When trying this with Dataflow, a dependency issue arises because avro-python3 version 1.82 is already available. The issue is with the class TimestampMillisSchema, which is not present in avro-python3; it fails stating that the attribute TimestampMillisSchema was not found in avro.schema. I then tried passing a requirements file with avro==1.11.0, but then Dataflow was not able to start, giving the error "Error syncing pod", which seems to be caused by dependency conflicts.
To solve the issue, we set an experiment flag (--experiments=no_use_multiple_sdk_containers), which ran fine.
I want to know whether there is a better solution to my issue, and also whether the above flag will affect pipeline performance.
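For reference, as far as I can tell from the description, the workaround was just the extra flag on top of the normal pipeline options; the pipeline file name below is a placeholder and requirements.txt pins avro==1.11.0 as described:

python my_pipeline.py \
  --runner=DataflowRunner \
  --requirements_file=requirements.txt \
  --experiments=no_use_multiple_sdk_containers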
Please try the Dataflow run command with:
--prebuild_sdk_container_engine=cloud_build --experiments=use_runner_v2
This will use Cloud Build to build a container with your extra dependencies and then use it in the Dataflow run.
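A minimal sketch of that invocation for a Python pipeline; the project, region, and file names are placeholders, and any other options you already pass stay unchanged:

python my_pipeline.py \
  --runner=DataflowRunner \
  --project=<project-id> \
  --region=<region> \
  --requirements_file=requirements.txt \
  --prebuild_sdk_container_engine=cloud_build \
  --experiments=use_runner_v2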

Prisma deploy gives error - GraphQL Tutorial

I am currently following a GraphQL Tutorial and am on the "Add a Database" section explaining how to set up Prisma with GraphQL.
I finished adding the information in the prisma.yml file, installed prisma, ran prisma deploy and followed the steps, but every time I run prisma info, prisma token, or prisma deploy again, I keep getting the same error:
TypeError: url_1.URL is not a constructor
I have been trying to look up solutions to the error. I tried updating Node and npm and downloading Docker, but nothing seems to make any difference.
Help would be greatly appreciated.
You are running into this issue.
Basically, this is a known issue with Node 6.x.
Please update your Node version to fix this, or track the progress of the above issue in case you want to continue using Node 6.x.
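A quick way to check and move off Node 6.x, assuming nvm is available (nvm itself is an assumption; any Node version manager or a plain installer works just as well):

node --version    # confirms whether you are on 6.x
nvm install 8     # install a newer release
nvm use 8         # switch the current shell to it
node --version    # verify before re-running prisma deploy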

Hyperledger Iroha: Can't generate genesis-block

I am trying to generate a new genesis-block in Hyperledger Iroha as it is suggested in
https://iroha.readthedocs.io/en/latest/getting_started/index.html#starting-iroha-node
and
https://hyperledger.github.io/iroha-api/#create-genesis-block
but unfortunately I can't do it because I am always getting the same error message.
$ cat peer.list
localhost:10001
$ ./iroha-cli --genesis_block --peers_address peer.list
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::out_of_range> >'
what(): bimap<>: invalid key
Aborted (core dumped)
I am receiving this error both on my local machine, where I compiled Iroha from scratch using the source code, and within an Iroha container.
I think I have the correct dependencies; otherwise I would not have been able to build Iroha from scratch. Also, note that I can start irohad correctly using the configuration example from https://iroha.readthedocs.io/en/latest/getting_started/index.html#launching-iroha-daemon.
Any help or suggestion is greatly appreciated.
There was, indeed, a bug affecting the permissions needed to generate a block. It is fixed now and should not occur: https://github.com/hyperledger/iroha/pull/1351
This is a known issue in the development of Hyperledger Iroha; see here: https://github.com/hyperledger/iroha/issues/1362.
It arises when Iroha is compiled with the Ansible Playbook.
Try uninstalling Ansible from your system and re-compiling Iroha, and you shouldn't encounter the same error.
Obviously this is just a workaround, and you won't be able to take advantage of the Ansible capabilities.
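If it helps, the rebuild boils down to removing Ansible and going through the build from source again; the commands below are only a generic CMake flow under that assumption, so follow the Iroha build docs for the exact flags:

pip uninstall ansible        # or: sudo apt-get remove ansible, depending on how it was installed
cd iroha                     # the source checkout
mkdir -p build && cd build
cmake ..
make -j$(nproc)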
