Why does my local RSK node show "invalid address" for my smart contract, even though I can see it in the block explorer? - checksum

I'm running RSKj with the following command to connect to RSK Testnet:
java -cp rskj-core-2.2.0-PAPYRUS-all.jar -Drpc.providers.web.cors=* co.rsk.Start --testnet
However, when I query an address,
I get the following error:
Invalid address
The checksum is invalid for the current network: RSK Testnet (31)
(The address is a smart contract, for what it is worth)
I have verified that this address exists in the RSK Block Explorer.
So what's causing this to happen?

If you are running RSKj locally, connected to RSK Testnet, and an address does not exist on your node but does exist on the RSK public nodes, you should check that your local node is fully synchronised.
To do this:
(1) Find out what the block number is on the public node:
curl \
-X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
https://public-node.testnet.rsk.co/
This yields a relatively large value, 0x1bfe36, which is 1834550 in decimal:
{"jsonrpc":"2.0","id":1,"result":"0x1bfe36"}
(2) Find out what the block number is on your local node:
curl \
-X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
http://localhost:4444/
If your local RSKj node has started from scratch and has not been running for long, you might get a much smaller number in the result, such as 0x3fff, which is 16383 in decimal:
{"jsonrpc":"2.0","id":1,"result":"0x3fff"}
(3) Compare the two values
If these commands are run within a short time of each other and return the same value, your local node is fully synchronised.
However, as seen in the example above, the local node's block number is lower than the public node's, which means it has not yet fully synchronised.
If the smart contract at this particular address was deployed after your local node's current block, you will get the above error, because from your node's point of view nothing exists at that address yet.
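If you want to confirm this directly, you can ask each node whether any contract code exists at the address, using the standard eth_getCode JSON-RPC method. A minimal sketch (the address below is a hypothetical placeholder; substitute the one you are querying):
curl \
-X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_getCode","params":["0x0000000000000000000000000000000000000000","latest"],"id":1}' \
http://localhost:4444/
A node that has synchronised past the deployment block returns the contract bytecode; a node that has not yet reached it returns "0x" (empty code).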
The solution is to wait until your local node is fully synchronised with the rest of the network (including the public node) before running your query again.
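If you want to automate the comparison, a small shell sketch along these lines (assuming your local node is listening on http://localhost:4444/) prints both responses side by side so you can compare the heights:
PAYLOAD='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
# query the public Testnet node and the local node with the same request
PUBLIC=$(curl -s -X POST -H "Content-Type: application/json" --data "$PAYLOAD" https://public-node.testnet.rsk.co/)
LOCAL=$(curl -s -X POST -H "Content-Type: application/json" --data "$PAYLOAD" http://localhost:4444/)
echo "public: $PUBLIC"
echo "local:  $LOCAL"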

Related

How to avoid "PeerDiscoveryException" when running RSK node on Windows?

I started running RSK node on Windows and when I tried:
curl -X POST -H "Content-Type:application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://localhost:4444
I always get:
{"jsonrpc":"2.0","id":1,"result":"0x0"}
This obviously means that my node is not syncing,
so I checked the logs and found Address already in use:
Exception in thread "UDPServer" co.rsk.net.discovery.PeerDiscoveryException: Discovery can't be started.
    at co.rsk.net.discovery.UDPServer$1.run(UDPServer.java:65)
Caused by: java.net.BindException: Address already in use...
I do not have any other RSK instance running,
so I'm not sure why I'm getting this error.
You need to change your peer discovery port (peer.port) to a different one.
This is because RSK Mainnet uses 5050 as its default peer discovery port, and on Windows ports in that lower range are often already assigned to other services.
For example, to start RSKj with a peer discovery port of 50506,
use the following command:
java \
-Dpeer.port=50506 \
-cp <PATH-TO-THE-RSKJ-JAR> \
co.rsk.Start \
--regtest
You may also choose to set peer.port=50506 in the relevant config file.
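For example, a minimal sketch of that entry, assuming your RSKj node reads a HOCON-style configuration file (such as node.conf):
peer {
    # hypothetical override of the default peer discovery port
    port = 50506
}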
NOTE: This issue does not usually occur on RSK Testnet
because its default peer discovery port is 50505,
a much larger port number.
This issue usually does not occur on other operating systems
because that RSK Mainnet port number is typically unused.
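If you want to confirm which process already holds the port on Windows, one way (a sketch using the built-in netstat and findstr tools, assuming the default Mainnet port 5050) is:
netstat -ano | findstr :5050
The last column of each matching line is the PID of the process bound to the port, which you can look up in Task Manager.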

Unable to read Pub/Sub messages from local emulator in Apache Beam

I am trying to run a simple Apache Beam pipeline with the DirectRunner that reads from a Pub/Sub subscription and writes the messages to disk.
The pipeline works fine when I run it against GCP, however when I try to run it against my local Pub/Sub emulator, it doesn't seem to be doing anything.
I am using a custom Options class that extends the org.apache.beam.sdk.io.gcp.pubsub.PubsubOptions class.
public interface Options extends PubsubOptions {
    @Description("Pub/Sub subscription to read the input from")
    @Required
    ValueProvider<String> getInputSubscription();

    void setInputSubscription(ValueProvider<String> valueProvider);
}
The pipeline is quite simple:
pipeline
    .apply("Read Pub/Sub Messages", PubsubIO.readMessagesWithAttributes()
        .fromSubscription(options.getInputSubscription()))
    .apply("Add a fixed window", Window.into(FixedWindows.of(Duration.standardSeconds(WINDOW_SIZE))))
    .apply("Convert Pub/Sub To String", new PubSubMessageToString())
    .apply("Write Pub/Sub messages to local disk", new WriteOneFilePerWindow());
The pipeline is executed with the following options
mvn compile exec:java \
-Dexec.mainClass=DefaultPipeline \
-Dexec.cleanupDaemonThreads=false \
-Dexec.args=" \
--project=my-project \
--inputSubscription=projects/my-project/subscriptions/my-subscription \
--pubsubRootUrl=http://127.0.0.1:8681 \
--runner=DirectRunner"
I am using this Pub/Sub emulator docker image and executing it with the following command:
docker run --rm -ti -p 8681:8681 -e PUBSUB_PROJECT1=my-project,topic:my-subscription marcelcorso/gcloud-pubsub-emulator:latest
Is there more configuration required to make this work?
It turns out that an Apache Beam pipeline is unable to read from a local Pub/Sub emulator if you have the GOOGLE_APPLICATION_CREDENTIALS environment variable set.
Once I removed this environment variable, which was pointing to a GCP service account, the pipeline worked seamlessly with the local Pub/Sub emulator.
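In other words, make sure the variable is not set in the shell that launches the pipeline against the emulator; a minimal sketch:
# remove the service-account pointer for this shell session only
unset GOOGLE_APPLICATION_CREDENTIALS
# then re-run the same mvn command shown above against the emulator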
You can troubleshoot the local emulator by issuing manual HTTP requests to it (via curl), like so:
$ curl -d '{"messages": [{"data": "c3Vwc3VwCg=="}]}' -H "Content-Type: application/json" -X POST localhost:8681/v1/projects/my-project/topics/topic:publish
{
"messageIds": ["5"]
}
$
$ curl -d '{"returnImmediately":true, "maxMessages":1}' -H "Content-Type: application/json" -X POST localhost:8681/v1/projects/my-project/subscriptions/my-subscription:pull
{
"receivedMessages": [{
"ackId": "projects/my-project/subscriptions/my-subscription:9",
"message": {
"data": "c3Vwc3VwCg==",
"messageId": "5",
"publishTime": "2019-04-30T17:26:09Z"
}
}]
}
$
Or by pointing the gcloud command-line tool at it:
$ CLOUDSDK_API_ENDPOINT_OVERRIDES_PUBSUB=localhost:8681 gcloud pubsub topics list
Also, note that when the emulator comes up, it creates the topic and subscription from scratch, so there are no messages on them. If your pipeline expects to immediately pull messages on the subscription, that would explain why it seems “stuck”. Note that when you run the pipeline at GCP, the topic and subscription you use there may already have messages on them.

FORBIDDEN/12/index read-only / allow delete (api) problem

When importing items into my Rails app I keep getting the above error, raised by Searchkick on behalf of Elasticsearch.
I'm running Elasticsearch in Docker and start my app by running docker-compose up. I've tried running the command recommended above, but I just get "No such file or directory" back.
I do have port 9200 exposed externally, but nothing seems to help. Any ideas?
Indeed, running curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' as suggested by @Nishant Saini resolves the very similar issue I just ran into.
I had hit the disk watermark limits on my machine.
Use the following command on Linux:
curl -s -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_all/_settings?pretty' -d '{
  "index": {
    "blocks": { "read_only_allow_delete": "false" }
  }
}'
The same command in Kibana's Dev Tools format:
PUT _all/_settings
{
  "index": {
    "blocks": { "read_only_allow_delete": "false" }
  }
}
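Since the read-only block is normally applied when the flood-stage disk watermark is exceeded, it is worth confirming disk usage before (or after) clearing the flag; a quick check (sketch) against the _cat allocation API:
curl -s 'http://localhost:9200/_cat/allocation?v'
If disk usage stays above the flood-stage watermark (95% by default), Elasticsearch is likely to re-apply the read-only block until space is freed.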

F5 BIG-IP update VIP using REST API causes code:400, message:0107028c:3

I am trying to call my F5 BIG-IP REST API to update some VIP configuration; for example, I want to update the VIP description using this command:
curl -s -k --tlsv1.2 -u admin:password -H "Content-Type: application/json" -X PUT https://ManagmentIP/mgmt/tm/ltm/virtual/~MyPool~MyVIP_887 {"description":"THIS IS JUST A TEST"}
I am getting this error:
{"code":400,"message":"0107028c:3: The source (::%10) and destination (10.62.185.3%10) addresses for virtual server (/MyPool/MyVIP_887) must be be the same type (IPv4 or IPv6).","errorStack":[],"apiError":3}
My F5 Big IP version: BIG-IP 12.1.3 Build 0.0.378 Final
Am I missing something?
The answer is taken from F5 DevCentral:
You have to use -d 'data', which is the JSON data to send. Note that you need to quote the entire JSON blob, and each "name":"value" pair must be quoted. When you have nested quotes, make sure you escape them (\).
Refer to the cookbook if it helps.
So something like:
curl -sku admin -H "Content-Type: application/json" -X PATCH \
https:///mgmt/tm/ltm/virtual/ \
-d '{"description": "Hello World!"}'

Distributed RabbitMQ Nodes don't recognize each other

I'm working on a RabbitMQ distributed POC and I'm stuck at the basics of clustering the nodes.
I'm trying to follow the RabbitMQ clustering tutorial, so that is my reference.
After installing Erlang (R14B04) and RabbitMQ (2.8.2-1) I copied the .erlang.cookie file contents from one node to the other two.
I wasn't sure how to get Erlang to notice this change, so I restarted the machines themselves (pretty brute force, but I don't know Erlang at all).
In addition I opened port 4369 and 5 additional ports for communication in iptables, and placed the following config under /usr/lib64/erlang/bin/sys.config:
[{kernel, [{inet_dist_listen_min, XX00}, {inet_dist_listen_max, XX05}]}].
Then another restart (dumb, I know) to verify Erlang takes these into consideration, but still, when I run:
rabbitmqctl cluster rabbit@HostName1
I get:
Clustering node rabbit@HostName2 with [rabbit@HostName1] ...
Error: {no_running_cluster_nodes,[rabbit@HostName1],
[rabbit@HostName1]}
There is a chance my fiddling with the erlang.cookie or with the ports did not succeed, but I don't know how to check them. I tried typing erl at the command line and then erl_epmd:names() or other commands to get more information, but I'm probably way off in Erlang land.
Would truly appreciate any help.
Update:
I tried pinging two Erlang nodes manually and got pang back.
I did the following:
I connected to two nodes, stopped RabbitMQ (wasn't sure if it was needed, but just to be sure), and started Erlang on each (erl -sname dilbert and erl -sname dilbert2). When the Erlang shell started I ran node(). on each of them and got dilbert@HostName1 and dilbert2@HostName2 respectively. I then tried to run net_adm:ping('dilbert'). and net_adm:ping('dilbert@HostName1'). with and without the single quotes, from both nodes (changing names accordingly), and got pang in all 8 cases.
When I ran nodes(). on one of the machines I got back an empty list.
I also tried allowing all traffic in the firewall (via a script) and then running the above commands (don't worry, the rules are back on now), and still got pang back.
Update 2:
For some reason I had a cookie mismatch, which I needed to resolve (thanks @kjw0188 for the suggestion to run erlang:get_cookie(). in the Erlang shell).
This did not help on its own, and I also needed to stop iptables completely (not sure why, but I'll figure it out soon) and start the Erlang node with -name dilbert@my-ip, because my Rackspace servers have no DNS name. This finally enabled me to get a pong and see the nodes recognise each other (nodes(). returns a non-empty list after the ping).
The problem I'm facing now is how to instruct RabbitMQ to use -name instead of -sname when starting erlang.
So I had multiple issues with connecting my two RabbitMQ nodes.
I'll add that my nodes are hosted on Rackspace, and so don't have a default exposable hostname, and they require iptables since there is no DMZ or built-in security group concept like on Amazon.
Problems:
1. Cookie - Not sure how or why, but I had multiple instances of .erlang.cookie (in /root, in my home directory and in /var/lib/rabbitmq/). I kept only the one in /var/lib/rabbitmq/ and verified that all nodes have the same cookie.
2. IPTables - In order for the nodes to communicate I needed to open the epmd port and the range of ports used for the actual communication (inet_dist_listen_min to inet_dist_listen_max):
/sbin/iptables -A INPUT -i eth1 -p tcp --dport ${epmd} -s ${otherNode} -j ACCEPT
/sbin/iptables -A INPUT -i eth1 -p tcp --dport ${inet_dist_listen_min}:${inet_dist_listen_max} -s ${otherNode} -j ACCEPT
epmd uses the usual port, 4369; for the other range use whatever range you want.
${otherNode} is the IP of my other node.
I also needed to configure Erlang, through RabbitMQ, to use these ports (see the config file at the end).
3. HostName - Seeing as I don't have a hostname, I needed to edit the RabbitMQ scripts to use -name and not -sname (the former tells Erlang to take the whole name, the latter stands for short name and thus appends an @ symbol and the hostname).
This was accomplished by editing:
/usr/lib/rabbitmq/bin/rabbitmqctl
At the beginning I added the definition of the RABBITMQ_NODE_IP_ADDRESS property:
DEFAULT_NODE_IP_ADDRESS=auto
DEFAULT_NODE_PORT=5672
[ "x" = "x$RABBITMQ_NODE_IP_ADDRESS" ] && RABBITMQ_NODE_IP_ADDRESS=${NODE_IP_ADDRESS}
[ "x" = "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_PORT=${NODE_PORT}
[ "x" = "x$RABBITMQ_NODE_IP_ADDRESS" ] && [ "x" != "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_IP_ADDRESS=${DEFAULT_NODE_IP_ADDRESS}
[ "x" != "x$RABBITMQ_NODE_IP_ADDRESS" ] && [ "x" = "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_PORT=${DEFAULT_NODE_PORT}
and in the actual erl command I changed
-sname ${RABBITMQ_NODENAME} \ to
-name ${RABBITMQ_NODENAME}@${RABBITMQ_NODE_IP_ADDRESS} \
This made RabbitMQ listen only on the specified IP address (specified in the config file at the end) and load with that IP instead of the usual hostname.
I then edited /usr/lib/rabbitmq/bin/rabbitmq-server and changed the actual erl command from -sname ${RABBITMQ_NODENAME} \ to -name ${RABBITMQ_NODENAME}@${RABBITMQ_NODE_IP_ADDRESS} \
Finally, I added a RabbitMQ environment config file (/etc/rabbitmq/rabbitmq-env.conf) with:
#the ip address which rabbit should use, this is to limit rabbit to only use internal rackspace communication and not publicly accessible ports
NODE_IP_ADDRESS=myIpAddress
#had to change the nodename because otherwise rabbitmq used rabbit@Hostname and not only rabbit
NODENAME=myCompany
#This instructed rabbit to instruct erlang which ports it should use for its communications with other nodes
export SERVER_ERL_ARGS="$SERVER_ERL_ARGS -kernel inet_dist_listen_min somePort -kernel inet_dist_listen_max someOtherBiggerPort"
Some resources which helped me along the way:
RabbitMQ Clustering Guide
Clustering RabbitMQ servers for High Availability
rabbitmq-env.conf(5) manual page
Node communication by public IP address erlang mailing list (The middle post)
Configuring RabbitMQ Cluster on Cloud
Hope this will help anyone else.
EDIT:
Not sure how I was mistaken, but it seems my Erlang/RabbitMQ port instructions were not taken into consideration or were not enough. I ended up having to allow all communication between the two nodes...
One thing to really watch out for is whitespace of any kind in the Erlang cookie file, especially line breaks AFTER the contents of the cookie. As long as both are identical, things are okay, but when one has a line break and the other doesn't, things won't work.
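A quick way to spot that kind of invisible difference is to dump the cookie file byte by byte on each node and compare, for example (a sketch, assuming the default cookie location):
od -c /var/lib/rabbitmq/.erlang.cookie
# or compare fingerprints across nodes
md5sum /var/lib/rabbitmq/.erlang.cookie
A trailing \n at the end of one file but not the other shows up immediately in the od output, and the checksums will differ.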
Background: I was facing the same issue while setting up a RabbitMQ cluster. I was using 2 Docker containers running on my host machine, which is equivalent to 2 separate nodes, and I could not create a cluster of the two.
Solution:
1. Make sure you have the same Erlang cookie on all your cluster nodes; the default location is /var/lib/rabbitmq/.erlang.cookie. This file is used for authentication, so make sure it is the same on all the nodes. After changing the .erlang.cookie, restart your rabbitmq service.
2. Make sure that the nodes are accessible from one another; use ping or telnet to check the connection.
3. Check that /etc/hosts has correct entries; for example, if rabbit2 wants to join the cluster on rabbit1, /etc/hosts on rabbit2 should contain:
172.68.1.6 rabbit1
172.68.1.7 rabbit2
Now stop the app using $rabbitmqctl stop_app, join the cluster with $rabbitmqctl join_cluster rabbit@rabbit1, start the app again with $rabbitmqctl start_app, and check $rabbitmqctl cluster_status to see whether you have joined the cluster or not.
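Putting those commands together, the sequence run on the joining node (rabbit2 in this example) might look like this (a sketch, assuming a rabbitmqctl recent enough to provide join_cluster):
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit1
rabbitmqctl start_app
rabbitmqctl cluster_status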
I followed the official RabbitMQ documentation to set up the cluster.
To change RabbitMQ sname/name behaviour you can edit the scripts:
rabbitmq-multi
rabbitmq-server
rabbitmqctl
Example
In script rabbitmqctl there is the following piece of code:
exec erl \
-pa "${RABBITMQ_HOME}/ebin" \
-noinput \
-hidden \
${RABBITMQ_CTL_ERL_ARGS} \
-sname rabbitmqctl$$ \
-s rabbit_control \
-nodename $RABBITMQ_NODENAME \
-extra "$#"
You have to change it in:
exec erl \
-pa "${RABBITMQ_HOME}/ebin" \
-noinput \
-hidden \
${RABBITMQ_CTL_ERL_ARGS} \
-name rabbitmqctl$$ \
-s rabbit_control \
-nodename $RABBITMQ_NODENAME \
-extra "$#"
http://pearlin.info/?p=1672
So you need to copy the cookie from the node you are trying to connect to.
Example: rabbit@node1 and rabbit@node2.
Go to rabbit@node1 and copy the cookie from cat /var/lib/rabbitmq/.erlang.cookie.
Go to rabbit@node2, remove the current cookie and paste the new one.
On the same node run:
/usr/sbin/rabbitmqctl stop_app
/usr/sbin/rabbitmqctl reset
/usr/sbin/rabbitmqctl cluster rabbit@node1
That should do it.
The same is documented here:
http://pearlin.info/?p=1672

Resources