Hyperledger Iroha: Can't generate genesis-block

I am trying to generate a new genesis block in Hyperledger Iroha, as suggested in
https://iroha.readthedocs.io/en/latest/getting_started/index.html#starting-iroha-node
and
https://hyperledger.github.io/iroha-api/#create-genesis-block
but I always get the same error message.
$ cat peer.list
localhost:10001
$ ./iroha-cli --genesis_block --peers_address peer.list
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::out_of_range> >'
what(): bimap<>: invalid key
Aborted (core dumped)
I get this error both on my local machine, where I compiled Iroha from source, and inside an Iroha container.
I believe I have the correct dependencies; otherwise I would not have been able to build Iroha from source. Also note that I can start irohad correctly using the configuration example from https://iroha.readthedocs.io/en/latest/getting_started/index.html#launching-iroha-daemon.
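For reference, this is roughly how I launch the daemon, following the example files from the docs (the paths and key name below come from that example and are effectively placeholders):
# start the daemon with the sample config, node keypair, and genesis block from the example directory
irohad --config example/config.sample --keypair_name example/node0 --genesis_block example/genesis.block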
Any help or suggestion is greatly appreciated.

There was, indeed, a bug affecting the permissions needed to generate a block. It has since been fixed and should no longer occur: https://github.com/hyperledger/iroha/pull/1351
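If you built iroha-cli from source before that fix landed, a rough sketch of picking it up and retrying (assuming the CMake build described in the Iroha docs; branch and job count are placeholders):
git pull                                          # update to a revision that includes the fix
cmake -H. -Bbuild && cmake --build build -- -j4   # rebuild the binaries
./iroha-cli --genesis_block --peers_address peer.list   # retry genesis block generation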

This is a known issue in the development of Hyperledger Iroha; see https://github.com/hyperledger/iroha/issues/1362.
It arises when Iroha is compiled with the Ansible Playbook.
Uninstall Ansible from your system and recompile Iroha, and you should no longer encounter the error.
Obviously this is just a workaround, and you won't be able to take advantage of the Ansible capabilities.
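How you remove Ansible depends on how it was installed; as a sketch, use whichever of these matches your setup:
pip uninstall ansible        # if Ansible was installed via pip
sudo apt-get remove ansible  # if it came from the system package manager
Then recompile Iroha from a clean build directory as before.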

Related

Azerothcore error azerothcore-wotlk-ac-worldserver-1 | MMAP:loadMap: 5303231.mmtile was built with generator v15, expected v16

I recently updated my AzerothCore Docker version, but I am encountering an error: "azerothcore-wotlk-ac-worldserver-1 | MMAP:loadMap: 5303231.mmtile was built with generator v15, expected v16". I have tried deleting AC and installing from scratch following the guide on the AzerothCore website, but when I run "./acore.sh docker client-data", it still downloads the v15 client data files instead of the latest v16. I have followed the Docker Setup and AzerothCore Bash Dashboard setup, but I am still having issues. Does anyone know how I can fix this problem?
Thanks in advance for any help.
Best regards.
This sounds to me like a PR is required to fix the Docker setup process. If a manual swap of the client data isn't sufficient, only an adjustment of the Docker script can solve this.
You should create an issue about it in the respective repo you've cloned, azerothcore-wotlk or acore-docker.
Edit: please update and try again, as https://github.com/azerothcore/azerothcore-wotlk/pull/14527 should address this.
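As a rough sketch of "update and try again" with the dashboard (assuming the stock acore.sh commands; adjust the directory to your checkout):
cd azerothcore-wotlk
git pull                       # pull the fix referenced above
./acore.sh docker build        # rebuild the Docker images
./acore.sh docker client-data  # re-download client data matching the new mmaps generator version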

Cannot create docs for components in Backstage: Docker error

I am trying to display the docs stored in a repository created by a Backstage component on the Backstage /docs page UI, but when I try to access the docs I get the following error
Building a newer version of this documentation failed. Error: "Failed to generate docs from C:\\Users\\Admin\\AppData\\Local\\Temp\\backstage-enprxk into C:\\Users\\Admin\\AppData\\Local\\Temp\\techdocs-tmp-W6iVab; caused by Error: Docker container returned a non-zero exit code (1)"
Files in my repository: the docs folder contains only index.md, and mkdocs.yml has
nav:
  - Home: index.md
I was getting similar issues working on a local POC of Backstage. The biggest problem was that I needed to install pip, python, mkdocs, and mkdocs-techdocs-core (e.g. pip3 install mkdocs-techdocs-core). If you have done that and then followed everything in the documentation, it should start working. Hope that helps; I spent a couple of days trying to get past these types of errors.
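A minimal sketch of those installs (assuming Python 3 and pip3 are already available on the machine or image that runs the TechDocs generator):
pip3 install mkdocs mkdocs-techdocs-core   # mkdocs-techdocs-core pulls in the TechDocs plugins that mkdocs needs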
For me, the above issue was fixed by the change below, since the Docker-based generator was not working inside my container in Kubernetes.
I changed app-config.yaml:
techdocs:
  builder: 'local' # Alternatives - 'external'
  generator:
    runIn: 'local' # changed from 'docker' to 'local' here

Use of experiments=no_use_multiple_sdk_containers in Google Cloud Dataflow

Issue Summary:
Hi,
I am using Avro version 1.11.0 to parse and decode an Avro file. We have a custom requirement, so I am not able to use ReadFromAvro. When trying this on Dataflow, a dependency issue arises because avro-python3 version 1.8.2 is already installed. The problem is the class TimestampMillisSchema, which is not present in avro-python3; the job fails stating "Attribute TimestampMillisSchema not found in avro.schema". I then tried passing a requirements file with avro==1.11.0, but then the Dataflow job was not able to start, giving the error "Error syncing pod", which seems to be caused by a dependency conflict.
To solve the issue, we set an experiment flag (--experiments=no_use_multiple_sdk_containers), and the job ran fine.
I want to know whether there is a better solution to my issue, and also whether the above flag will affect pipeline performance.
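For reference, the workaround we ended up with amounts to roughly this (file names are placeholders):
# requirements.txt
avro==1.11.0
# extra options added to the pipeline launch command
--requirements_file=requirements.txt --experiments=no_use_multiple_sdk_containers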
Please try the Dataflow run command with:
--prebuild_sdk_container_engine=cloud_build --experiments=use_runner_v2
This uses Cloud Build to pre-build a container with your extra dependencies and then uses that container for the Dataflow run.
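A hedged sketch of a full launch with those options (the pipeline file, project, region, and bucket are placeholders; depending on your Beam SDK version you may also need --docker_registry_push_url):
python my_pipeline.py \
  --runner=DataflowRunner \
  --project=my-project \
  --region=us-central1 \
  --temp_location=gs://my-bucket/tmp \
  --requirements_file=requirements.txt \
  --prebuild_sdk_container_engine=cloud_build \
  --experiments=use_runner_v2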

Channel Creation Failed. gRPC timeout

I'm following the getting started steps to set up a Fabric environment on my Mac, from the steps mentioned here:
When I try to start my network using the ./network_setup.sh up script, I get the following gRPC timeout error (as shown in the attached images).
Does anyone have any idea what I am missing?
So I got it working. The issue was that something had updated the .proto files. I had to run the following command from the fabric directory to regenerate the corresponding pb.go files:
make protos
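Roughly, the sequence looks like this (the GOPATH-based location is an assumption from the classic Fabric setup; use wherever you cloned fabric):
cd $GOPATH/src/github.com/hyperledger/fabric   # or wherever the fabric source lives
make protos                                    # regenerate the *.pb.go files from the .proto definitions
Then re-run ./network_setup.sh up from the directory you used before.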

AWS Appium Project Package

I see the error below while packaging an Appium project for AWS.
Unknown lifecycle phase --DskipTests=true. You must specify a valid lifecycle phase or a goal in the format
Note:
I am executing the packaging command in a Mac terminal.
I tried both --DskipTests=true and -DskipTests=true (and I see the same error for both).
Looking forward to some help. Thanks!
I work for the AWS Device Farm team.
I have seen this error when users copy and paste the command from the documentation.
We are working on updating the documentation, as some unknown characters seem to get introduced.
Users have gotten this to work by deleting -DskipTests=true and typing it out instead of copy-pasting it.
Since you are on a Mac terminal you will need to use a single dash '-' for the parameter.
Apologies for the inconvenience.
Hope this helps.
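A hand-typed version of the packaging command would look roughly like this (the clean and package goals are the usual ones from the Device Farm packaging instructions; the key point is the single plain ASCII '-' before DskipTests):
mvn clean package -DskipTests=true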
