Crossbar says 'no callee registered for procedure' - beagleboneblack

Despite having two nodes connected to a Crossbar router on my local network, they aren't seeing each other, and the browser console says:
Potentially unhandled rejection [1] {"error":"wamp.error.no_such_procedure","args":["no callee registered for procedure <com.at.mcu.e6a003528063>"],"kwargs":{}} (WARNING: non-Error used)
Crossbar version:
Crossbar.io : 17.11.1 (Crossbar.io COMMUNITY)
Autobahn : 17.10.1 (with JSON, MessagePack, CBOR, UBJSON)
Twisted : 17.9.0-EPollReactor
LMDB : 0.93/lmdb-0.9.18
Python : 2.7.13/CPython
OS : Linux-4.4.91-ti-r133-armv7l-with-debian-9.2
Machine : armv7l
Logs from crossbar start - you can see the backend python script and browser have joined at the bottom:
(cb) root#beaglebone:/home/debian# crossbar start
2017-12-29T03:24:30+0000 [Controller 14401] __ __ __ __ __ __ __ __
2017-12-29T03:24:30+0000 [Controller 14401] / `|__)/ \/__`/__`|__) /\ |__) |/ \
2017-12-29T03:24:30+0000 [Controller 14401] \__,| \\__/.__/.__/|__)/~~\| \. |\__/
2017-12-29T03:24:30+0000 [Controller 14401]
2017-12-29T03:24:30+0000 [Controller 14401] Version: Crossbar.io COMMUNITY 17.11.1
2017-12-29T03:24:30+0000 [Controller 14401] Public Key: 0cca8ca2a08252c53be3eb8a43d85976e89bf2a7ccfd813cda115d2436495fc8
2017-12-29T03:24:30+0000 [Controller 14401]
2017-12-29T03:24:30+0000 [Controller 14401] Node starting with personality "community" [crossbar.controller.node.Node]
2017-12-29T03:24:30+0000 [Controller 14401] Running from node directory "/home/debian/.crossbar"
2017-12-29T03:24:30+0000 [Controller 14401] Node configuration loaded from "/home/debian/.crossbar/config.json"
2017-12-29T03:24:30+0000 [Controller 14401] Controller process starting [CPython-EPollReactor] ..
2017-12-29T03:24:30+0000 [Controller 14401] No extra node router roles
2017-12-29T03:24:30+0000 [Controller 14401] RouterServiceSession ready [no on_ready configured]
2017-12-29T03:24:30+0000 [Controller 14401] Registered 20 procedures
2017-12-29T03:24:30+0000 [Controller 14401] Using default node shutdown triggers [u'shutdown_on_worker_exit']
2017-12-29T03:24:30+0000 [Controller 14401] Configuring node from local configuration ...
2017-12-29T03:24:30+0000 [Controller 14401] Starting 1 workers ...
2017-12-29T03:24:30+0000 [Controller 14401] Router worker "worker-001" starting ..
2017-12-29T03:24:38+0000 [Router 14407] Started Router worker "worker-001" on node "None" [crossbar.worker.router.RouterWorkerSession / CPython-EPollReactor]
2017-12-29T03:24:38+0000 [Router 14407] Router worker "worker-001" session 3343461634770564 initializing ..
2017-12-29T03:24:38+0000 [Router 14407] Registered 37 procedures
2017-12-29T03:24:38+0000 [Router 14407] Router worker "worker-001" session ready
2017-12-29T03:24:38+0000 [Controller 14401] Router worker "worker-001" process 14407 started
2017-12-29T03:24:38+0000 [Router 14407] RouterServiceSession ready [configured on_ready fired]
2017-12-29T03:24:38+0000 [Router 14407] Realm 'at-realm' started
2017-12-29T03:24:38+0000 [Controller 14401] Router "worker-001": realm 'realm-001' (named 'at-realm') started
2017-12-29T03:24:38+0000 [Router 14407] role role-001 on realm realm-001 started
2017-12-29T03:24:38+0000 [Controller 14401] Router "worker-001": role 'role-001' (named 'anonymous') started on realm 'realm-001'
2017-12-29T03:24:38+0000 [Router 14407] UniSocketServerFactory starting on 8080
2017-12-29T03:24:38+0000 [Controller 14401] Router "worker-001": transport 'transport-001' started
2017-12-29T03:24:38+0000 [Controller 14401] Local node configuration applied successfully!
2017-12-29T03:24:40+0000 [Router 14407] session "2793196082778339" joined realm "at-realm"
2017-12-29T03:24:41+0000 [Router 14407] session "5981437596467214" joined realm "at-realm"
I have:
Confirmed both sides are using the same procedure URI: com.at.mcu.e6a003528063.
Found that identical browser and python code connect correctly when placed on another machine (a BeagleBone Black Wireless running a slightly different version of Debian). For this reason, I think it has something to do with crossbar's installation and/or configuration.
Recreated crossbar from scratch in a new virtualenv on the working machine (it works) and the non-working machine (doesn't work).
Confirmed that .crossbar/config.json is identical.
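For context on what the router is actually reporting: the dealer answers a CALL with wamp.error.no_such_procedure whenever no live registration exists for that exact URI at call time. A minimal sketch of that lookup (an illustrative model only, not Crossbar's implementation):

```python
# Illustrative model of a WAMP dealer's procedure table -- NOT Crossbar's
# real code. A CALL for a URI with no registration is answered with
# wamp.error.no_such_procedure, which is the error shown above.
class TinyDealer:
    def __init__(self):
        self.registrations = {}  # procedure URI -> callee session id

    def register(self, uri, session_id):
        self.registrations[uri] = session_id

    def call(self, uri):
        if uri not in self.registrations:
            raise LookupError(
                "wamp.error.no_such_procedure: "
                f"no callee registered for procedure <{uri}>")
        return self.registrations[uri]

dealer = TinyDealer()
dealer.register("com.at.mcu.e6a003528063", 2793196082778339)
assert dealer.call("com.at.mcu.e6a003528063") == 2793196082778339
```

The point of the model: the registration must exist on the same router and realm and still be live when the browser calls. If the Python side registers after the browser's call, or its session has dropped, the caller sees exactly this error even though both sessions joined the realm.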

Related

Unable to connect to redis master with tls from go app using go-redis pkg

For the past two weeks I have been trying to set up (locally) Redis and my Go microservice to communicate using TLS. I use Docker Compose to bring up all of the needed containers: Redis master, Redis slave, Redis Sentinel, and a Go application that uses the go-redis package. All of the Redis services are equipped with the needed certificates: root CA, service certificate, and service private key. I also turned off mutual TLS (mTLS), which Redis uses by default (--tls-auth-clients no). I run docker-compose and all of the Redis services discover each other and establish TLS connections without any problems. When I connect to the Go application container and run redis-cli inside it (redis-cli -h master -p 6379 --tls), I successfully connect to the master using TLS and am able to execute commands without any problems.
The problem comes when I start the Go application and try to connect to the Redis master (in this specific case the command is a key SET). All three Redis services use the same TLS version (--tls-protocols TLSv1.2), so I added it to the Go app config. Here is the Redis client I create:
redisClient = redis.NewFailoverClient(&redis.FailoverOptions{
    SentinelAddrs: adrs,
    MasterName:    redisMaster,
    Password:      password,
    DB:            db,
    OnConnect: func(ctx context.Context, conn *redis.Conn) error {
        // logging ...
        // redis: Successfully connected to Redis
        return nil
    },
    TLSConfig: &tls.Config{
        //InsecureSkipVerify: true,
        MinVersion: tls.VersionTLS12,
    },
})
When the application is started I receive the following logs (not sure why I receive several success messages):
{"connection":"Redis\u003csentinel:26379 db:0\u003e","level":"info","msg":"redis: Successfully connected to Redis"}
{"connection":"Redis\u003csentinel:26379 db:0\u003e","level":"info","msg":"redis: Successfully connected to Redis"}
redis: 2022/04/03 18:09:53 sentinel.go:643: sentinel: new master="mymaster" addr="redis-master:6379"
{"connection":"Redis\u003cFailoverClient db:0\u003e","level":"info","msg":"redis: Successfully connected to Redis"}
{"connection":"Redis\u003cFailoverClient db:0\u003e","level":"info","msg":"redis: Successfully connected to Redis"}
{"connection":"Redis\u003cFailoverClient db:0\u003e","level":"info","msg":"redis: Successfully connected to Redis"}
{"connection":"Redis\u003cFailoverClient db:0\u003e","level":"info","msg":"redis: Successfully connected to Redis"}
{"error":"read tcp 1.2.3.4:50516-\u003e5.6.7.8:6379: read: connection reset by peer","level":"error","master_name":"mymaster","msg":"redis: failed to connect to Redis"}
Docker-compose logs for redis-master:
master_1 | 1:M 03 Apr 2022 18:09:53.419 # Error accepting a client connection: error:1408F10B:SSL routines:ssl3_get_record:wrong version number (conn: fd=11)
master_1 | 1:M 03 Apr 2022 18:09:53.444 # Error accepting a client connection: error:1408F10B:SSL routines:ssl3_get_record:wrong version number
master_1 | 1:M 03 Apr 2022 18:09:53.456 # Error accepting a client connection: error:1408F10B:SSL routines:ssl3_get_record:wrong version number
master_1 | 1:M 03 Apr 2022 18:09:53.507 # Error accepting a client connection: error:1408F10B:SSL routines:ssl3_get_record:wrong version number
I have no idea why redis-cli connects to the master without any problems while the application fails when they run in the same container. I also don't know why the above error says SSL routines:ssl3_get_record:wrong version number.
I will appreciate any help!
Thanks in advance!
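One diagnostic hint (a hypothesis, not a confirmed answer): OpenSSL's ssl3_get_record:wrong version number is the generic symptom of plaintext arriving on a TLS-enabled port, because the first bytes of the connection do not form a valid TLS record header. A small sketch of the check the server effectively performs (illustrative only, not Redis or go-redis code):

```python
# A TLS ClientHello begins with record content type 0x16 (handshake) and a
# 0x03 major-version byte; anything else arriving on a TLS port produces
# errors like "ssl3_get_record:wrong version number". Illustrative check.
def looks_like_tls_record(first_bytes: bytes) -> bool:
    return (len(first_bytes) >= 2
            and first_bytes[0] == 0x16   # handshake record type
            and first_bytes[1] == 0x03)  # SSL3/TLS major version

assert looks_like_tls_record(bytes([0x16, 0x03, 0x01]))    # TLS handshake
assert not looks_like_tls_record(b"*1\r\n$4\r\nPING\r\n")  # plaintext RESP
```

Since redis-cli --tls works from the same container, the certificates and network path are fine; the errors suggest some connection in the Go failover path is being dialed without TLS applied, so that's worth checking on the client side.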

Connection timeout using local kafka-connect cluster to connect on a remote database

I'm trying to run a local kafka-connect cluster using docker-compose.
I need to connect to a remote database, and I'm also using a remote Kafka and Schema Registry.
I have enabled access to these remote resources from my machine.
To start the cluster, from my project folder in an Ubuntu WSL2 terminal, I run:
docker build -t my-connect:1.0.0 .
docker-compose up
The application runs successfully, but when I try to create a new connector, the request returns error 500 with a timeout.
My Dockerfile
FROM confluentinc/cp-kafka-connect-base:5.5.0
RUN cat /etc/confluent/docker/log4j.properties.template
ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components"
ARG JDBC_DRIVER_DIR=/usr/share/java/kafka/
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:5.5.0 \
&& confluent-hub install --no-prompt confluentinc/connect-transforms:1.3.2
ADD java/kafka-connect-jdbc /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/
COPY java/kafka-connect-jdbc/ojdbc8.jar /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/
ENTRYPOINT ["sh","-c","export CONNECT_REST_ADVERTISED_HOST_NAME=$(hostname -I);/etc/confluent/docker/run"]
My docker-compose.yaml
services:
  connect:
    image: my-connect:1.0.0
    ports:
      - 8083:8083
    environment:
      - CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - CONNECT_BOOTSTRAP_SERVERS=broker1.intranet:9092
      - CONNECT_GROUP_ID=kafka-connect
      - CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter
      - CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_OFFSET_STORAGE_TOPIC=kafka-connect.offset
      - CONNECT_CONFIG_STORAGE_TOPIC=kafka-connect.config
      - CONNECT_STATUS_STORAGE_TOPIC=kafka-connect.status
      - CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY=All
      - CONNECT_LOG4J_ROOT_LOGLEVEL=INFO
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
      - CONNECT_REST_ADVERTISED_HOST_NAME=localhost
My cluster it's up
~$ curl -X GET http://localhost:8083/
{"version":"5.5.0-ccs","commit":"606822a624024828","kafka_cluster_id":"OcXKHO7eT4m9NBHln6ACKg"}
Connector call
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '
{
  "name": "my-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.user": "user",
    "database.password": "pass",
    "database.dbname": "SID",
    "database.schema": "schema",
    "database.server.name": "dbname",
    "schema.include.list": "schema",
    "database.connection.adapter": "logminer",
    "database.hostname": "databasehost",
    "database.port": "1521"
  }
}'
Error
{"error_code": 500,"message": "IO Error trying to forward REST request: java.net.SocketTimeoutException: Connect Timeout"}
## LOG
connect_1 | [2021-07-01 19:08:50,481] INFO Database Version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
connect_1 | Version 19.4.0.0.0 (io.debezium.connector.oracle.OracleConnection)
connect_1 | [2021-07-01 19:08:50,628] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection)
connect_1 | [2021-07-01 19:08:50,643] INFO AbstractConfig values:
connect_1 | (org.apache.kafka.common.config.AbstractConfig)
connect_1 | [2021-07-01 19:09:05,722] ERROR IO error forwarding REST request: (org.apache.kafka.connect.runtime.rest.RestClient)
connect_1 | java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Connect Timeout
Testing the connection to the database
$ telnet databasehostname 1521
Trying <ip>... Connected to databasehostname
Testing connection to kafka broker
$ telnet broker1.intranet 9092
Trying <ip>... Connected to broker1.intranet
Testing connection to remote schema-registry
$ telnet schema-registry.intranet 8081
Trying <ip>... Connected to schema-registry.intranet
What am I doing wrong? Do I need to configure something else to allow connection to this remote database?
You need to set rest.advertised.host.name correctly (or CONNECT_REST_ADVERTISED_HOST_NAME if you're using Docker).
This is how a Connect worker communicates with other workers in the cluster.
For more details see Common mistakes made when configuring multiple Kafka Connect workers by Robin Moffatt.
In your case, try removing CONNECT_REST_ADVERTISED_HOST_NAME=localhost from the compose file.
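A related pitfall worth checking: the ENTRYPOINT above exports the output of hostname -I, which can contain several space-separated addresses plus a trailing space, so the advertised host may not be a single usable address. A sketch of the sanitizing that value typically needs (the addresses shown are hypothetical):

```python
# `hostname -I` prints all configured addresses separated by spaces, with a
# trailing space -- e.g. "172.18.0.2 10.0.0.5 \n" (hypothetical values).
# Exported verbatim as CONNECT_REST_ADVERTISED_HOST_NAME that is not one
# clean hostname; taking the first whitespace-separated token fixes it.
raw = "172.18.0.2 10.0.0.5 \n"   # stand-in for $(hostname -I)
advertised = raw.split()[0]
assert advertised == "172.18.0.2"
```

In the Dockerfile's shell form this would be something like export CONNECT_REST_ADVERTISED_HOST_NAME=$(hostname -I | awk '{print $1}').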

Port error when setting up Dev mode of Hyperledger Fabric

I'm setting up the development environment following the instructions on Hyperledger fabric's official website:
https://hyperledger-fabric.readthedocs.io/en/latest/peer-chaincode-devmode.html
I have started the orderer successfully using:
ORDERER_GENERAL_GENESISPROFILE=SampleDevModeSolo orderer
This command didn't work at first, but it worked after I changed directory to fabric/sampleconfig:
2020-12-21 11:23:15.084 CST [orderer.common.server] Main -> INFO 009 Starting orderer: Version: 2.3.0 Commit SHA: dc2e59b3c Go version: go1.15.6 OS/Arch: darwin/amd64
2020-12-21 11:23:15.084 CST [orderer.common.server] Main -> INFO 00a Beginning to serve requests
but when I start the peer using:
export PATH=$(pwd)/build/bin:$PATH
export FABRIC_CFG_PATH=$(pwd)/sampleconfig
export FABRIC_LOGGING_SPEC=chaincode=debug
export CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
peer node start --peer-chaincodedev=true
An error occurs:
2020-12-21 11:25:13.047 CST [nodeCmd] serve -> INFO 001 Starting peer: Version: 2.3.0 Commit SHA: dc2e59b3c Go version: go1.15.6 OS/Arch: darwin/amd64 Chaincode: Base Docker Label: org.hyperledger.fabric Docker Namespace: hyperledger
2020-12-21 11:25:13.048 CST [peer] getLocalAddress -> INFO 002 Auto-detected peer address: 10.200.83.208:7051
2020-12-21 11:25:13.048 CST [peer] getLocalAddress -> INFO 003 Host is 0.0.0.0 , falling back to auto-detected address: 10.200.83.208:7051
This is the error:
Error: failed to initialize operations subsystem: listen tcp 127.0.0.1:9443: bind: address already in use
I checked this issue, and it seems it happens because the peer node is using the same port 9443 as the orderer node for the same service. How can I get the two nodes running separately? Docker seems to be running fine as well.
If you look at your error, you can easily see what's happening:
Error: failed to initialize operations subsystem: listen tcp 127.0.0.1:9443: bind: address already in use
It says that port 9443 is already in use.
It seems that you are not running the orderer and peer as separate containers on a Docker-based virtual network, but directly on the host PC.
That means two servers end up requesting the same port 9443 on your machine.
Referring to the fabric-2.3/sampleconfig configuration below, you can see that port 9443 is assigned to a server in each file. Assigning one of them a different port solves this.
fabric-2.3/sampleconfig/orderer.yaml
configuration of orderer
# orderer.yaml
...
Admin:
  # host and port for the admin server
  ListenAddress: 127.0.0.1:9443
...
fabric-2.3/sampleconfig/core.yaml
configuration of peer
# core.yaml
...
operations:
  # host and port for the operations server
  # listenAddress: 127.0.0.1:9443
  listenAddress: 127.0.0.1:10443
...
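Before restarting the peer, it can also help to confirm that something really is holding the port. A minimal stdlib check (assumes it runs on the same host as the orderer):

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    # connect_ex returns 0 when something is already listening on the port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# While the orderer's admin server is up, port_in_use("127.0.0.1", 9443)
# should return True; once the peer's operations listener is moved to
# 10443, the two processes no longer collide.
```

The same check works for any of the sample ports (7051, 7052, 9443, 10443) when you're unsure which process grabbed what.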
This is not a direct answer to the port mapping / collision issue, but we've had great success using the new Kubernetes Test Network as a development platform running on a local system with a virtual Kubernetes cluster running in KIND (Kubernetes in Docker).
In this mode, applications can be developed using the Gateway client (exposed via a port forward or ingress), and smart contracts running as-a-service can be launched either in the cluster or on the local host OS as a container, a binary, or in a debugger.
The documentation for the development setup is still sparse, but we'd love to hear feedback on the overall approach, as it offers an exponentially better experience for working with a test network in a development context. In general the process of "port juggling" with Compose is no longer relevant when working on a local Kubernetes cluster. In this mode, you can run services on the host network, instructing peers/orderers/etc. to connect to the remote process running on the host OS.

ActiveMQ login screen is not showing when runs as a Docker image

I have created an ActiveMQ Dockerfile, and when I start the image I cannot get to the login screen. The URL is http://127.0.0.1:8161
Here is my Dockerfile; you can also see the URL in the log.
# Using jdk as base image
FROM openjdk:8-jdk-alpine
# Copy the whole directory of activemq into the image
COPY activemq /opt/activemq
# Set the working directory to the bin folder
WORKDIR /opt/activemq/bin
# Start up the activemq server
ENTRYPOINT ["./activemq","console"]
and here is the log from the console
INFO: Using java '/usr/lib/jvm/java-1.8-openjdk/bin/java'
INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
INFO: Creating pidfile /opt/activemq//data/activemq.pid
Java Runtime: IcedTea 1.8.0_212 /usr/lib/jvm/java-1.8-openjdk/jre
Heap sizes: current=390656k free=386580k max=5779968k
JVM args: -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/opt/activemq//conf/login.config -Djava.awt.headless=true -Djava.io.tmpdir=/opt/activemq//tmp -Dactivemq.classpath=/opt/activemq//conf:/opt/activemq//../lib/: -Dactivemq.home=/opt/activemq/ -Dactivemq.base=/opt/activemq/ -Dactivemq.conf=/opt/activemq//conf -Dactivemq.data=/opt/activemq//data
Extensions classpath:
[/opt/activemq/lib,/opt/activemq/lib/camel,/opt/activemq/lib/optional,/opt/activemq/lib/web,/opt/activemq/lib/extra]
ACTIVEMQ_HOME: /opt/activemq
ACTIVEMQ_BASE: /opt/activemq
ACTIVEMQ_CONF: /opt/activemq/conf
ACTIVEMQ_DATA: /opt/activemq/data
Loading message broker from: xbean:activemq.xml
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@6be46e8f: startup date [Mon Nov 23 15:32:26 GMT 2020]; root of context hierarchy
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb]
INFO | KahaDB is version 7
INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] started
INFO | Apache ActiveMQ 5.16.0 (localhost, ID:afee6bfb43ba-45805-1606145547047-0:1) is starting
INFO | Listening for connections at: tcp://afee6bfb43ba:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector openwire started
INFO | Listening for connections at: amqp://afee6bfb43ba:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector amqp started
INFO | Listening for connections at: stomp://afee6bfb43ba:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector stomp started
INFO | Listening for connections at: mqtt://afee6bfb43ba:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector mqtt started
INFO | Starting Jetty server
INFO | Creating Jetty connector
WARN | ServletContext o.e.j.s.ServletContextHandler@ab7395e{/,null,STARTING} has uncovered http methods for path: /
INFO | Listening for connections at ws://afee6bfb43ba:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector ws started
INFO | Apache ActiveMQ 5.16.0 (localhost, ID:afee6bfb43ba-45805-1606145547047-0:1) started
INFO | For help or more information please see: http://activemq.apache.org
INFO | ActiveMQ WebConsole available at http://127.0.0.1:8161/
INFO | ActiveMQ Jolokia REST API available at http://127.0.0.1:8161/api/jolokia/
What have I done wrong? Thanks!
As at ActiveMQ 5.16.0 the jetty endpoint host value was changed from 0.0.0.0 to 127.0.0.1, see AMQ-7007.
To overcome this in my Dockerfile I use CMD ["/bin/sh", "-c", "bin/activemq console -Djetty.host=0.0.0.0"]
ActiveMQ startup is done by ENTRYPOINT in your Dockerfile, so CMD ["/bin/sh", "-c", "bin/activemq console -Djetty.host=0.0.0.0"] won't work.
The correct usage with ENTRYPOINT is:
ENTRYPOINT ["./activemq","console","-Djetty.host=0.0.0.0"]
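Alternatively, the same host value can be made persistent in the web console's Jetty configuration instead of a JVM flag. A fragment as it appears in ActiveMQ 5.16.x's conf/jetty.xml (verify against your distribution's copy of the file):

```xml
<!-- conf/jetty.xml: bind the ActiveMQ web console to all interfaces -->
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <!-- default is 127.0.0.1 as of 5.16.0 (AMQ-7007) -->
    <property name="host" value="0.0.0.0"/>
    <property name="port" value="8161"/>
</bean>
```

With the config baked into the image, the plain ENTRYPOINT ["./activemq","console"] then works unchanged.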

ActiveMQ change ports

When I try to start my ActiveMQ broker, I get an address already in use error:
2015-01-17 18:41:32,828 | ERROR | Failed to start Apache ActiveMQ ([localhost, ID:Laptop-44709-1421516492312-0:1], java.io.IOException: Transport Connector could not be registered in JMX: Failed to bind to server socket: amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600 due to: java.net.BindException: Die Adresse wird bereits verwendet)
I have tried to inspect the service running on port 5672 with netstat | grep, but it doesn't show the PID for some reason. So I tried changing the default port for AMQP:
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:61617?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
However, when I run sudo /etc/init.d/activemq start, ActiveMQ ignores my config and keeps binding to the port already in use.
Any ideas why?
I have been setting up ActiveMQ following this guide:
http://servicebus.blogspot.de/2011/02/installing-apache-active-mq-on-ubuntu.html
I faced a problem with the ActiveMQ configs (JMX in particular) when I used a symlink in init.d on Ubuntu. ActiveMQ started working fine after I replaced the symlink with a script like:
#! /bin/sh
ACTIVEMQ_HOME="/opt/activemq"
case "$1" in
  start)
    $ACTIVEMQ_HOME/bin/activemq start
    ;;
  stop)
    $ACTIVEMQ_HOME/bin/activemq stop
    ;;
  restart)
    $ACTIVEMQ_HOME/bin/activemq restart
    ;;
  status)
    $ACTIVEMQ_HOME/bin/activemq status
    ;;
  *)
    echo "Valid commands: start|stop|restart|status" >&2
    ;;
esac
exit 0
