Connection problem with Cassandra

I'm new to Cassandra. I'm trying to connect to Cassandra but could not connect.
The steps I'm following are:
1. Start the server with the command:
/root/Documents/apache-cassandra-0.6.6/bin/cassandra -f
2. On another terminal, give the command:
/root/Documents/apache-cassandra-0.6.6/bin/cassandra-cli
which prints "Welcome to Cassandra CLI".
3. Then connect to Cassandra by giving the command:
/root/Documents/apache-cassandra-0.6.6/bin/cassandra-cli
but I get the exception: "Exception connecting to 10.10.10.142/9160 - java.net.NoRouteToHostException: No route to host"
Can anyone help me understand why I am getting this exception?

This has nothing to do with Cassandra.
The documentation for NoRouteToHostException states:
"Signals that an error occurred while attempting to connect a socket to a remote address and port. Typically, the remote host cannot be reached because of an intervening firewall, or if an intermediate router is down."

Another option to consider is to add port 7199 to the firewall rules, or, just as a test to see whether you can access a two-node system, to turn off the firewall in Linux using "sudo service firewalld stop":
[dse@orion conf]$ dsetool status mars
DC: Cassandra    Workload: Cassandra    Graph: no
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load        Effective-Ownership  VNodes  Rack   Health [0,1]
UN  10.0.0.165  250.03 KiB  100.00%              1       rack1  0.20
UN  10.0.0.20   656.65 KiB  100.00%              256     rack1  0.40
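Independent of the firewall settings, a quick way to confirm plain TCP reachability of the Thrift port from the client machine is a bare socket connect (a minimal Python sketch; the host and port come from the error message above):

import socket

# 10.10.10.142:9160 is the address/port from the exception in the question.
try:
    with socket.create_connection(("10.10.10.142", 9160), timeout=5):
        print("TCP connect succeeded; the port is reachable")
except OSError as exc:
    # "No route to host" here implicates the network/firewall, not Cassandra.
    print("TCP connect failed:", exc)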

Error accessing Scylladb cluster outside docker container

I'm running Scylladb locally in a docker container and I want to access the cluster outside the docker container. That's when I'm getting the following error: cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers')
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load    Tokens  Owns  Host ID                               Rack
UN  172.17.0.2  776 KB  256     ?     ad698c75-a465-4deb-a92c-0b667e82a84f  rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
Cluster Information:
    Name: Test Cluster
    Snitch: org.apache.cassandra.locator.SimpleSnitch
    DynamicEndPointSnitch: disabled
    Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
    Schema versions:
        443048b2-c1fe-395e-accd-5ae9b6828464: [172.17.0.2]
I have no problem accessing the cluster using cqlsh on port 9042:
Connected to Test Cluster at 172.17.0.2:9042.
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.3.1 | Native protocol v4]
Now I'm trying to access the cluster from my fastapi app that is outside the docker container.
from cassandra.cluster import Cluster
cluster = Cluster(['172.17.0.2'])
session = cluster.connect('Test Cluster')
And here's the Error that I'm getting:
raise NoHostAvailable("Unable to connect to any servers", errors)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'172.17.0.2:9042': OSError(51, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Network is unreachable")})
With a little bit of tinkering, it's possible to connect to the Scylla instance running in a container from outside the container for local development.
I've tried this on an M1 Mac with Docker Desktop:
Run the Scylla container with a couple of extra parameters [src]:
--listen-address 0.0.0.0 for simplicity, since we are spawning Scylla inside the container, to allow connections to the container from any network
--broadcast-rpc-address 127.0.0.1, required if --listen-address is set to 0.0.0.0. We are going to port-forward 9042 from the container to the host (local) machine, so this is the IP where it will be accessible.
The final command to spawn the container is:
$ docker run --rm -ti \
-p 127.0.0.1:9042:9042 \
scylladb/scylla \
--smp 1 \
--listen-address 0.0.0.0 \
--broadcast-rpc-address 127.0.0.1
The -p 127.0.0.1:9042:9042 makes port 9042 accessible on the host (local) machine.
Install the driver with pip3 install scylla-driver, as it has support for the darwin/arm64 architecture.
Write a simple python script:
# so74265199.py
from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
# Select from a table that is available without keyspace
res = session.execute('SELECT * FROM system.versions')
print(res.one())
Run your script
$ python3 so74265199.py
Row(key='local', build_id='71178cf6db7021896cd8251751b78b3d9e3afa8d', build_mode='release', version='5.0.5-0.20221009.5a97a1060')
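To double-check which addresses the node actually advertises, the same session can query system.local (a hedged follow-up; listen_address and rpc_address are standard columns of that table in both Cassandra and Scylla):

# With the container flags above, rpc_address should report 127.0.0.1,
# i.e. the --broadcast-rpc-address value.
row = session.execute("SELECT listen_address, rpc_address FROM system.local").one()
print(row.listen_address, row.rpc_address)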
Disclaimer: I'm not an expert in Scylla's configuration, so feel free to point out a better approach.

schema registry docker from confluent

I want to use the Schema Registry Docker image (owned by Confluent) with the open-source Kafka I installed locally on my PC.
I am using the following command to run the image:
docker run -p 8081:8081 \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://127.0.0.1:9092 \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest
but I am getting the following connection errors:
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
I have Kafka installed on my localhost.
Any idea how to solve this, please?
I have Kafka installed on my localhost
As commented, localhost is unclear when you're actually using multiple machines (one physical and at least one virtual)
You need to use host.docker.internal:9092
https://docs.docker.com/docker-for-windows/networking/ (removed because host is not windows)
On a Linux host, you need to use host networking mode
https://docs.docker.com/network/host/
Although, realistically, running Kafka in a container would be simpler for connecting the two
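Once the bootstrap address is one the container can actually reach, the check the registry performs at startup is easy to reproduce (a hypothetical sketch assuming the kafka-python package, run inside a container on Docker Desktop, where host.docker.internal resolves to the host):

# Hypothetical check with kafka-python (pip install kafka-python):
# fetch cluster metadata, much like the registry's readiness probe does.
from kafka.admin import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="host.docker.internal:9092")
print(admin.describe_cluster())  # should list the single local broker
admin.close()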

Monitor Mininet OpenFLow Traffic in WireShark with RYU Controller

I am using the Ryu controller for an SDN setup. I want to monitor the basic OpenFlow handshake messages, but I have failed to do so.
Here are the steps I take after installing Mininet, Wireshark, and Ryu.
Step 1: start the controller:
./bin/ryu-manager --verbose ryu/app/simple_switch_13.py
Step 2: start the virtual network:
sudo mn --topo single,3 --mac --controller remote --switch ovsk,protocols=OpenFlow13
Now no traffic shows up in my Wireshark. I am using Wireshark version 1.12, which has an OpenFlow dissector installed.
When I capture on the loopback interface it shows the echo request and reply packets, but I want to see the "Features Request" from the Ryu controller in Wireshark.
Here is what I did:
Be sure "openvswitch-testcontroller" is down:
yavuz@ubuntu:/tmp$ service --status-all | grep openv
[ + ] openvswitch-switch
[ - ] openvswitch-testcontroller
Run the Ryu application:
yavuz@ubuntu:~/ryu$ pwd
/home/yavuz/ryu
yavuz@ubuntu:~/ryu$ sudo ryu-manager --verbose ryu/app/example_switch_13.py
lzma module is not available
Registered VCS backend: git
Registered VCS backend: hg
Registered VCS backend: svn
Registered VCS backend: bzr
loading app ryu/app/example_switch_13.py
Before starting Mininet, run tcpdump on the loopback interface lo (not eth0 or the like):
sudo tcpdump -i lo -w ryu-local.cap
Run Mininet:
yavuz@ubuntu:/tmp$ sudo mn --topo single,3 --controller=remote --mac
*** Creating network
*** Adding controller
Connecting to remote controller at 127.0.0.1:6653
*** Adding hosts:
h1 h2 h3
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
*** Configuring hosts
h1 h2 h3
*** Starting controller
c0
*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet> h1 ping h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=8.38 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.228 ms
Stop the trace and open it.
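If you prefer checking the capture programmatically instead of opening it in Wireshark, here is a hypothetical post-processing sketch (assuming the dpkt package and the controller port 6653 shown in the Mininet output above; it only inspects the first OpenFlow message in each TCP segment):

import dpkt
from collections import Counter

# OpenFlow 1.3 message types of interest (version byte 0x04).
OFPT = {0: "HELLO", 2: "ECHO_REQUEST", 3: "ECHO_REPLY",
        5: "FEATURES_REQUEST", 6: "FEATURES_REPLY"}

counts = Counter()
with open("ryu-local.cap", "rb") as f:
    for _ts, buf in dpkt.pcap.Reader(f):
        eth = dpkt.ethernet.Ethernet(buf)
        if not isinstance(eth.data, dpkt.ip.IP):
            continue
        ip = eth.data
        if not isinstance(ip.data, dpkt.tcp.TCP):
            continue
        tcp = ip.data
        if 6653 not in (tcp.sport, tcp.dport) or len(tcp.data) < 2:
            continue
        if tcp.data[0] == 0x04:  # OpenFlow 1.3
            counts[OFPT.get(tcp.data[1], tcp.data[1])] += 1

print(counts)  # FEATURES_REQUEST should show up right after HELLO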
Hope this helps.
In short: you can't.
Feature request/reply is part of OpenFlow, not part of the IP stack, so it is embedded in the payload of the TCP/IP packets.
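Concretely, every OpenFlow message starts with a fixed 8-byte header inside the TCP payload, which is what the Wireshark dissector keys on (a minimal sketch of that layout per the OpenFlow 1.3 spec):

import struct

def parse_ofp_header(payload: bytes):
    # OpenFlow header: version(u8), type(u8), length(u16), xid(u32), big-endian.
    return struct.unpack("!BBHI", payload[:8])

# A features-request is just the bare header: version 0x04, type 5, length 8.
print(parse_ofp_header(b"\x04\x05\x00\x08\x00\x00\x00\x01"))
# -> (4, 5, 8, 1)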

Connecting local cassandra cluster with docker container

I have a set of microservices running as Docker containers. One microservice, say A, wants to connect to Cassandra running locally on my laptop. In order to do so I have the configuration below.
Snippet from the YAML file of service A:
cassandra:
  hosts: [127.0.0.1]
  keyspace: "My keyspace"
  protocol_version: 3
  ports: 9042
On the other side, I started Cassandra by calling ./bin/cassandra, and then connected to it locally with cqlsh, whose output is below:
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.6 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.
Now when my container comes up and tries to connect to this running Cassandra hosted on my machine, it says connection refused. Please see the trace below:
File "cassandra/cluster.py", line 2076, in cassandra.cluster.ControlConnection._reconnect_internal (cassandra/cluster.c:36914)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'127.0.0.1': ConnectionRefusedError(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
[start] application exit with code 1, killing container
More info: I am using apache-cassandra-3.0.6.
Please advise. Thanks.
As Shibashis has mentioned, you probably cannot reach the host from the docker container via 127.0.0.1.
Find the IP of the Docker host as seen from the container:
How to get the IP address of the docker host from inside a docker container
Then start the Cassandra instance after changing, in conf/cassandra.yaml,
listen_address
rpc_address
to the recognized HOST IP.
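For example, once cassandra.yaml advertises the host's address, a quick connectivity test from inside the container could look like this (a minimal sketch; 192.168.1.10 is a placeholder for the Docker host IP found above, and the port/protocol mirror the service's YAML):

# Hypothetical driver test; replace 192.168.1.10 with your Docker host's IP.
from cassandra.cluster import Cluster

cluster = Cluster(["192.168.1.10"], port=9042, protocol_version=3)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
cluster.shutdown()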
Hope it helps!

ActiveMQ change ports

When I try to start my ActiveMQ broker, I get an address already in use error:
2015-01-17 18:41:32,828 | ERROR | Failed to start Apache ActiveMQ ([localhost, ID:Laptop-44709-1421516492312-0:1], java.io.IOException: Transport Connector could not be registered in JMX: Failed to bind to server socket: amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600 due to: java.net.BindException: Die Adresse wird bereits verwendet)
I have tried to inspect the service running on port 5672 with netstat | grep, but it doesn't show the PID for some reason. So I tried changing the default port for AMQP:
<!--
    The transport connectors expose ActiveMQ over a given protocol to
    clients and other brokers. For more information, see:
    http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
    <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="amqp" uri="amqp://0.0.0.0:61617?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
However, when I run sudo /etc/init.d/activemq start, ActiveMQ ignores my config and still tries to bind to the port already in use.
Any ideas why?
I have been setting up ActiveMQ following this guide:
http://servicebus.blogspot.de/2011/02/installing-apache-active-mq-on-ubuntu.html
I faced a problem with the ActiveMQ configs (JMX in particular) when I used a symlink in init.d on Ubuntu. ActiveMQ started working fine after I replaced the symlink with a script like:
#! /bin/sh
ACTIVEMQ_HOME="/opt/activemq"
case "$1" in
    start)
        $ACTIVEMQ_HOME/bin/activemq start
        ;;
    stop)
        $ACTIVEMQ_HOME/bin/activemq stop
        ;;
    restart)
        $ACTIVEMQ_HOME/bin/activemq restart
        ;;
    status)
        $ACTIVEMQ_HOME/bin/activemq status
        ;;
    *)
        echo "Valid commands: start|stop|restart|status" >&2
        ;;
esac
exit 0
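As an aside on the netstat inspection above that showed no PID: a hypothetical way to find the process holding port 5672 (assuming the psutil package; run as root so other users' sockets are visible):

import psutil

# Print every process listening on TCP port 5672 (the AMQP port).
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_LISTEN and conn.laddr.port == 5672:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(conn.laddr, conn.pid, name)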
