I'm following the tutorial on how to build and install Hyperledger Iroha 2 (https://hyperledger.github.io/iroha-2-docs/guide/build-and-install.html). I've cloned this repository: https://github.com/hyperledger/iroha/tree/iroha2-stable
Everything works fine until I run the docker-compose up command, at which point I hit two issues:
Telemetry does not start correctly
iroha-iroha3-1 | 2022-12-23T19:46:49.464546Z ERROR iroha: Telemetry did not start
It fails to deserialise the raw genesis block, which leads to the peer exiting
iroha-iroha0-1 | 2022-12-23T19:46:49.951644Z INFO iroha: Iroha version cargo_pkg_version="2.0.0-pre-rc.9" git_sha="28e4a4d088cda046410cf93f30dae3a925b3d82e"
iroha-iroha0-1 | Error: Failed to deserialize raw genesis block from "/config/genesis.json"
iroha-iroha0-1 |
iroha-iroha0-1 | Caused by:
iroha-iroha0-1 | data did not match any variant of untagged enum DeserializeHelper at line 246 column 9
iroha-iroha0-1 |
iroha-iroha0-1 | Location:
iroha-iroha0-1 | /iroha/core/src/genesis.rs:254:41
I'm a bit lost as to what the issue is, or even where to find this "/config/genesis.json" file, as it's inside the Docker image.
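The file can be copied out of the running container with docker cp iroha-iroha0-1:/config/genesis.json . (the container name is taken from the logs above). Once extracted, a first sanity check is whether the file is even well-formed JSON; the "untagged enum" message comes from Iroha's own deserializer, so the sketch below only rules out plain JSON corruption, not a schema mismatch:

```python
# Check whether a genesis file is well-formed JSON and report where
# parsing fails. Iroha's "untagged enum" error is a schema mismatch,
# so this only rules out plain JSON corruption as a first step.
import json

def check_json(text):
    try:
        json.loads(text)
        return "ok"
    except json.JSONDecodeError as e:
        return f"invalid JSON at line {e.lineno} column {e.colno}: {e.msg}"

print(check_json('{"transactions": []}'))  # -> ok
print(check_json('{"transactions": [}'))   # reports the failing line/column
```

If the file parses cleanly, the next suspect is a version mismatch between the genesis format and the Iroha build consuming it.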
Any help would be much appreciated
I am using Neo4j CE; there are system and neo4j databases.
-bash-4.2$ neo4j --version
neo4j 4.3.2
I am having trouble starting the neo4j database.
neo4j#system> show databases;
| name | address | role | requestedStatus | currentStatus | error | default | home |
| "neo4j" | "awsneodevldb01.est1933.com:7687" | "standalone" | "online" | "offline" | "An error occurred! Unable to start DatabaseId{483e7f9b[neo4j]}." | TRUE | TRUE |
How can I find out why the neo4j database is unable to start?
How can I remove it and create an empty neo4j database?
I recreated an empty neo4j database on CE by following another post on Stack Overflow:
stop neo4j
remove /var/lib/neo4j/data/databases/neo4j
remove /var/lib/neo4j/data/transactions/neo4j
start neo4j
You have to remove the neo4j directory under transactions too.
For reference, you can find out why the neo4j database could not start up from the following log file:
/var/log/neo4j/debug.log
Caused by: java.lang.RuntimeException: Fail to start 'DatabaseId{483e7f9b[neo4j]}' since transaction logs were found, while database files are missing.
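Putting the steps together: the database store and its transaction logs must be removed as a pair, otherwise you get exactly the "transaction logs were found, while database files are missing" failure quoted above. A sketch of the removal step, demonstrated on a throwaway directory layout rather than the real /var/lib/neo4j/data (and with neo4j assumed stopped first):

```python
# Mirror the manual recovery steps: with neo4j stopped, remove BOTH the
# database store and its transaction logs, then start neo4j again so it
# recreates an empty database. Removing only one of the two triggers
# "transaction logs were found, while database files are missing".
import shutil
from pathlib import Path

def wipe_default_db(data_dir, db_name="neo4j"):
    for sub in ("databases", "transactions"):
        target = Path(data_dir) / sub / db_name
        if target.exists():
            shutil.rmtree(target)

# Demonstration on a throwaway layout instead of /var/lib/neo4j/data:
demo = Path("demo_data")
for sub in ("databases", "transactions"):
    (demo / sub / "neo4j").mkdir(parents=True, exist_ok=True)
wipe_default_db(demo)
print(sorted(p.name for p in demo.iterdir()))  # both parent dirs remain, now empty
```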
I am running WSL Ubuntu 20.04 (Version 2 with Docker Desktop Support) within Windows 10 Pro Version 21H1
The steps are as follows:
git clone https://github.com/textileio/powergate.git
cd powergate/
cd docker/
nano docker-compose.yaml, where I added "["lotus", "daemon", "--import-snapshot", "https://fil-chain-snapshots-fallback.s3.amazonaws.com/mainnet/minimal_finality_stateroots_latest.car"]" between lines 32 and 33.
make up
Waited for the node to finish importing and then syncing.
^C, then make down, then deleted the line "["lotus", "daemon", "--import-snapshot", "https://fil-chain-snapshots-fallback.s3.amazonaws.com/mainnet/minimal_finality_stateroots_latest.car"]" from docker-compose.yaml
make up
Now that the node was running, I typed cd .. so I was in the repo's root directory, then make install-pow
With the pow command in my GOPATH, I typed pow to make sure pow was linked fine to powd. It was.
pow admin users create
copied the token and ran export POW_TOKEN=<token copied to here>
Then pow wallet addrs and funded the address
I went to the parent directory of my static website's folder, which is about 5 GB in size.
I typed pow data stage <my-static-site-folder>
After it was finished staging and printed out the CID, I typed pow config apply --watch <CID>, waited a long time while it said the job was executing, and then I got...
---------------------------------------+--------------------------------+-------+-------+--------------
<job id here> | JOB_STATUS_FAILED executing | | |
| cold-storage config: making | | |
| deal configs: getting miners | | |
| from minerselector: getting | | |
| miners from reputation | | |
| module: not enough miners from | | |
| reputation module to satisfy | | |
| the constraints | | |
I don't understand what the problem is. I repeated the pow config apply --watch <CID> command, each time adding the --override flag with several different modifications to a custom config file. The content did appear briefly on IPFS (not Filecoin), but after I continued running the config apply command the site went down from IPFS.
This problem can be fixed by adding miners to the "trustedMiners" entry in the config file, because pow doesn't necessarily detect miners that fit your specs.
I went to a Filecoin miner info aggregation site (I used "https://filrep.io/") and added miners to the trustedMiners section of the config file used in the apply command to start a Filecoin deal.
For example the "trustedMiners" line in your config file should look like this:
"trustedMiners": ["<Miner Id>", "<Miner Id>","<Miner Id>", "<Miner Id>", ...],
with however many miners you want to add.
Then you would execute the command:
pow config apply --watch <CID> -o -c new-config-file.json
By the way, the --watch flag is optional; it just allows you to see the status of the deal in real time.
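If it helps, here is a sketch of building such an override config programmatically. The nesting cold -> filecoin -> trustedMiners is an assumption based on the shape of pow's storage config (diff it against the default config your pow version produces), and the miner IDs are placeholders:

```python
# Build a config override that pins a set of trusted miners. The nesting
# ("cold" -> "filecoin" -> "trustedMiners") is assumed from the shape of
# pow's storage config; verify it against your version's default config.
import json

def with_trusted_miners(config, miner_ids):
    cfg = json.loads(json.dumps(config))  # cheap deep copy, leaves the input untouched
    cfg.setdefault("cold", {}).setdefault("filecoin", {})["trustedMiners"] = list(miner_ids)
    return cfg

base = {"hot": {"enabled": True}, "cold": {"enabled": True, "filecoin": {}}}
new_cfg = with_trusted_miners(base, ["f01234", "f05678"])  # placeholder miner IDs
print(json.dumps(new_cfg, indent=2))
```

The resulting JSON is what you would pass as new-config-file.json in the apply command above.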
I'm currently developing a FIWARE-based network, in which I have devices that report via HTTP and JSON over MQTT and work fine. The network has grown and now I need to attach some devices that use the UltraLight protocol. Doing so, I encountered some trouble.
I followed (to the best of my knowledge) the official documentation provided by the FIWARE Foundation on different sites (the official GitHub repo, readthedocs, and so on). I tried installing the new agent on the same machine as the JSON agent, and it didn't work (more on that later). In order to rule out any conflicts, I used another VM in which, over Docker this time, I deployed a new instance of Orion CB, Mosquitto, MongoDB and the agent; basically a complete new FIWARE stack.
After everything was deployed, I created a new service group on the agent via the REST API (POST /iot/services), giving it an API key and the CB address. In this step I left the resource field empty, because I don't really know what role it plays in the whole system. The response was 201, as expected.
The next step was to provision a device, which I did by POSTing to agent/iot/devices with the attributes I wanted, and the API key mentioned in the last paragraph. Once again, the response was 201.
The problem arises when I try to publish a new measurement using mosquitto_pub. The command runs smoothly but the entity in Orion does not get updated. Accessing Orion's DB (MongoDB) I can check that the entity was created successfully, but it has an empty value. Moreover, checking the logs yields the following:
mosquitto | 1559157902: New connection from 10.150.150.173 on port 1883.
mosquitto | 1559157902: New client connected from 10.150.150.173 as mosqpub|28750-mqtt (p1, c1, k60).
fiware-iot-agent | time=2019-05-29T19:25:02.374Z | lvl=DEBUG | corr=2c8aa6e3-faab-4166-9e20-0b362c165939 | trans=2c8aa6e3-faab-4166-9e20-0b362c165939 | op=IoTAgentNGSI.MongoDBGroupRegister arams ["resource","apikey"] with queryObj {"resource":"/iot/d","apikey":"apikeymia"} | comp=IoTAgent
fiware-iot-agent | time=2019-05-29T19:25:02.381Z | lvl=DEBUG | corr=2c8aa6e3-faab-4166-9e20-0b362c165939 | trans=2c8aa6e3-faab-4166-9e20-0b362c165939 | op=IoTAgentNGSI.MongoDBGroupRegister elds [["resource","apikey"]] not found: [{"resource":"/iot/d","apikey":"apikeymia"}] | comp=IoTAgent
fiware-iot-agent | time=2019-05-29T19:25:02.382Z | lvl=ERROR | corr=2c8aa6e3-faab-4166-9e20-0b362c165939 | trans=2c8aa6e3-faab-4166-9e20-0b362c165939 | op=IOTAUL.Common.Binding | srv=n/a | essing device measures [/apikeymia/motion003/attrs] | comp=IoTAgent
mosquitto | 1559157902: Client mosqpub|28750-mqtt disconnected.
after each attempt to publish a new measurement.
Any help would be appreciated
In this step I left the resource field empty, because I don't really know what role it plays in the whole system
Looking to the logs:
[...] queryObj {"resource":"/iot/d","apikey":"apikeymia"} | comp=IoTAgent
[...] not found: [{"resource":"/iot/d","apikey":"apikeymia"}] | comp=IoTAgent
I'd suggest using "/iot/d" instead of empty for the resource field. Maybe that could solve the problem.
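To make the failure mode concrete: as the log lines show, the agent resolves the service group by the (resource, apikey) pair, so a group saved with an empty resource never matches a lookup for "/iot/d". A toy model of that lookup (not the agent's actual code):

```python
# Toy model of the IoT Agent's service-group lookup: groups are keyed by
# the (resource, apikey) pair, so a group provisioned with resource=""
# is never found when the agent queries with resource="/iot/d".
groups = {}

def register_group(resource, apikey, cb_host):
    groups[(resource, apikey)] = {"cbHost": cb_host}

def find_group(resource, apikey):
    return groups.get((resource, apikey))

register_group("", "apikeymia", "http://orion:1026")       # as provisioned
print(find_group("/iot/d", "apikeymia"))                   # -> None (the "not found" in the logs)
register_group("/iot/d", "apikeymia", "http://orion:1026") # the suggested fix
print(find_group("/iot/d", "apikeymia") is not None)       # -> True
```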
When I use the command docker-compose up in the directory that has the docker-compose file and Dockerfile, I am hit with the error below. The error was that in acp_times.py there was an extra parenthesis, which I removed. When I try to run the container again I get the same error message; why is this?
I am new to Docker; if any additional info is needed to help solve the problem, let me know. I'm not even sure what I am looking for; I followed the Docker docs' simple instructions. Could it be that something else in my Python code is incorrect?
Attaching to brevets_web_1
web_1 | Traceback (most recent call last):
web_1 | File "flask_brevets.py", line 10, in <module>
web_1 | import acp_times # Brevet time calculations
web_1 | File "/app/acp_times.py", line 18
web_1 | minTable = [(1300,26), (1000,13.333)), (600, 11.428),
(400, 15), (200, 15)]
web_1 | ^
web_1 | SyntaxError: invalid syntax
brevets_web_1 exited with code 1
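For completeness, the line the traceback points at has an unbalanced closing parenthesis after (1000,13.333); a sketch of the fixed literal, with the rest of acp_times.py assumed unchanged:

```python
# Balanced version of the tuple list from /app/acp_times.py line 18;
# the stray ")" after (1000, 13.333) was the syntax error.
minTable = [(1300, 26), (1000, 13.333), (600, 11.428), (400, 15), (200, 15)]
print(len(minTable))  # -> 5
```

Note that fixing the file alone is not enough when the code was baked into the image at build time; the image has to be rebuilt, which is what the answer addresses.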
Like @avigil said, you need to rebuild your image in order to update it.
If you want to do it in one command you can type:
docker-compose up --build
If you really want to be sure that your containers are recreated run the following command:
docker-compose up --build --force-recreate
I've been playing with a Spring Cloud app consisting of a config server, a discovery server (Eureka), and a Feign client with Ribbon (used internally by Feign). I have two services, movie-service and daily-update-service. The intent is to provide a daily update of popular movies, news and weather in one place.
The problem I'm having is that the Feign client in daily-update-service is not able to find movie-service. It errors out with the following:
Caused by: java.lang.RuntimeException: com.netflix.client.ClientException: Load balancer does not have available server for client: movie-service
daily_update_service_1 | at org.springframework.cloud.netflix.feign.ribbon.LoadBalancerFeignClient.execute(LoadBalancerFeignClient.java:59) ~[spring-cloud-netflix-core-1.1.0.M4.jar:1.1.0.M4]
daily_update_service_1 | at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:95) ~[feign-core-8.12.1.jar:8.12.1]
daily_update_service_1 | at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:74) ~[feign-core-8.12.1.jar:8.12.1]
daily_update_service_1 | at feign.hystrix.HystrixInvocationHandler$1.run(HystrixInvocationHandler.java:54) ~[feign-hystrix-8.12.1.jar:8.12.1]
daily_update_service_1 | at com.netflix.hystrix.HystrixCommand$1.call(HystrixCommand.java:294) ~[hystrix-core-1.4.21.jar:1.4.21]
daily_update_service_1 | ... 21 common frames omitted
daily_update_service_1 | Caused by: com.netflix.client.ClientException: Load balancer does not have available server for client: movie-service
daily_update_service_1 | at com.netflix.loadbalancer.LoadBalancerContext.getServerFromLoadBalancer(LoadBalancerContext.java:468) ~[ribbon-loadbalancer-2.1.0.jar:2.1.0]
daily_update_service_1 | at com.netflix.loadbalancer.reactive.LoadBalancerCommand$1.call(LoadBalancerCommand.java:184) ~[ribbon-loadbalancer-2.1.0.jar:2.1.0]
daily_update_service_1 | at com.netflix.loadbalancer.reactive.LoadBalancerCommand$1.call(LoadBalancerCommand.java:180) ~[ribbon-loadbalancer-2.1.0.jar:2.1.0]
My debugging so far shows that the DomainExtractingServerList is trying to do a lookup by VIP, which is movie-service, and coming up with no servers. The services are registered in Eureka and I can see them on the Eureka dashboard.
I'm not sure what pieces of the code are relevant, so I'm posting a link to the GitHub project. Assuming you have Docker and Docker Compose installed, the easiest way to get it up and running is to clone the project and then follow the instructions below. These instructions are for Mac/Linux; adapt them if necessary for Windows. I'll provide specific code snippets if someone wants to see them here instead of looking in the code.
cd daily-update-microservices.
Replace all occurrences of my Docker host IP with yours. You can use this command: grep -rl '192.168.99.107' . | xargs perl -pi -e "s/192\.168\.99\.107/$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')/"
Run ./gradlew clean buildDockerImage
Run docker-compose -f daily-update-service/docker-compose.yml up.
Once the services come up, do a curl -v http://$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}'):10000/dailyupdate/movies/popular
Upon further investigation, I found that if eureka.client.fetchRegistry is false, the various shuffle methods in com.netflix.discovery.shared.Applications are not called and hence Applications.shuffleVirtualHostNameMap is never populated. This map is used later for look up in the method Applications.getInstancesByVirtualHostName that then fails.
I don't understand why a client would be forced to download the registry. A client might prefer to make the network trip each time, or fetch deltas only when necessary.
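A toy model of what the investigation above found, assuming nothing about Eureka's real internals beyond the behavior described: the VIP map is only populated along the registry-fetch path, so with fetchRegistry=false every lookup by VIP comes back empty even though the services are registered:

```python
# Toy model: the VIP map (standing in for shuffleVirtualHostNameMap) is
# only filled while processing a fetched registry, so with
# fetchRegistry=False a later lookup by VIP ("movie-service") finds no
# servers even though they are registered on the Eureka side.
class Applications:
    def __init__(self):
        self.vip_map = {}  # stands in for shuffleVirtualHostNameMap

    def shuffle_and_index(self, registry):
        for vip, instances in registry.items():
            self.vip_map[vip] = list(instances)

    def get_instances_by_vip(self, vip):
        return self.vip_map.get(vip, [])

def client(fetch_registry, remote_registry):
    apps = Applications()
    if fetch_registry:  # models eureka.client.fetchRegistry
        apps.shuffle_and_index(remote_registry)
    return apps

registry = {"movie-service": ["172.17.0.2:8080"]}
print(client(False, registry).get_instances_by_vip("movie-service"))  # -> []
print(client(True, registry).get_instances_by_vip("movie-service"))   # -> ['172.17.0.2:8080']
```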
I've opened an issue on Github for this. Will wait for their response.