Could anyone explain what type of connections exist between the nodes?
Are they encrypted in any way? I cannot find anything in the official documentation.
Update:
http://developer.couchbase.com/documentation/server/current/security/security-comm-encryption.html
Depending on the version you are running and the deployment topology you have chosen with services, Couchbase Server maintains a number of connections between nodes. You can find the list of ports we use for internal communication between nodes under "node to node" here:
http://developer.couchbase.com/documentation/server/4.5/install/install-ports.html
Couchbase Server does not encrypt communication between nodes today. You can use other solutions, such as IPSec, to do that. Couchbase Server does encrypt data access, web console traffic, and cluster-to-cluster communication with XDCR.
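In the meantime, it is worth at least restricting the node-to-node ports to the cluster's own subnet at the firewall. A minimal iptables sketch; the subnet and the two port ranges shown are assumptions, so substitute the real "node to node" list from the ports page above for your version:

    # Allow cluster members on the internal ports, drop everyone else
    # (10.0.0.0/24 and the example ports are assumptions)
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 4369 -j ACCEPT
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 21100:21299 -j ACCEPT
    iptables -A INPUT -p tcp --dport 4369 -j DROP
    iptables -A INPUT -p tcp --dport 21100:21299 -j DROP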
I have a server running Ubuntu 20.04 with nginx, mosquitto, Node-RED, and Docker; let's call the website http://mywebsite.com. The problem I am facing is that I have created a client in Docker, let's call it client1, so the URL will be http://mywebsite.com/client1,
and I want to establish an MQTT connection via mosquitto, sending the data on topic test.
On the Node-RED MQTT node, when I enter the IP address of my mosquitto container, it works.
But if I replace the IP address 192.144.0.5 with mywebsite.com/client1, I can't connect to mosquitto and I can't send or receive any form of data.
Any idea how to solve this problem?
OK, you are going to have several problems here.
You cannot do path-based proxying with MQTT. If you want multiple MQTT brokers (one per client) bound to a single public-facing domain/IP address, then they will all have to run on separate ports (other than the default 1883).
Nginx can do MQTT protocol proxying (e.g. like this), so you can use this to expose the different ports and forward them to the separate instances of mosquitto. But even if you had a different hostname per client (all pointing at the same IP address), nginx would have no way to know which hostname was used, because MQTT has no equivalent of the HTTP Host header to direct it. If you were to use MQTT with TLS, you may be able to get it to work with SNI (possible docs for SNI-based routing here). It works; there is an explanation of how to do it here.
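As a concrete illustration of the port-based approach, a sketch of the nginx stream configuration (this goes at the top level of nginx.conf, not inside the http block; the second container IP and the public ports are assumptions):

    stream {
        server {
            listen 1884;                    # public port for client1
            proxy_pass 192.144.0.5:1883;    # client1's mosquitto container
        }
        server {
            listen 1885;                    # public port for client2
            proxy_pass 192.144.0.6:1883;    # assumed IP of a second broker
        }
    }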
If you use MQTT over WebSockets, then you should be able to use hostname-based routing.
Path-based proxying for Node-RED currently doesn't work properly if you enable admin authentication, because the admin auth tokens are currently stored in browser local storage and scoped only to the hostname, not the hostname + path. This means a client will only ever be able to log into one instance at a time.
You can work around this by using host-based proxying, e.g. http://client1.mywebsite.com (see the sketch after the next paragraph).
A fix for this is on the Node-RED backlog, probably (no promises) to be looked at after version 1.2.0 ships.
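A sketch of what one host-based server block might look like (inside the http context; the upstream address is an assumption and Node-RED's default port 1880 is assumed unchanged; repeat per client, with DNS for each subdomain pointing at this server):

    server {
        listen 80;
        server_name client1.mywebsite.com;

        location / {
            proxy_pass http://192.144.0.5:1880;   # client1's Node-RED container
            # the Node-RED editor uses websockets, so pass the upgrade headers
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }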
The new version of Docker (version 1.10) includes a DNS server that provides alias information for other containers on the same network. There used to be hosts-file entries for resolving linked containers (or containers on the same network). I am wondering whether it is possible to use this embedded DNS server on an overlay network. I have looked in the documentation (and in the issues) and cannot find any information about this.
So the way the new embedded DNS "server" works is that it isn't a formal server; it's just an embedded listener for traffic to 127.0.0.11:53 (UDP, of course). When Docker sees query traffic on the container's network interface, it steps in with its embedded DNS server and replies with any answers it has for the query. The documentation lists some options you can set to affect how this DNS server behaves, but since it only listens for query traffic on that localhost address, there is no way to expose it to an overlay network in the way you are thinking. However, this seems to be a moving target, and I have seen this question before in IRC, so it may one day be the case that this embedded DNS server becomes pluggable, or exposable in the way you would like.
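You can see the behaviour from inside any container on a user-defined network. A quick sketch (the network name, container name, and images are arbitrary):

    docker network create demo-net
    docker run -d --name web --net=demo-net nginx
    # inside the network, resolv.conf points at the embedded listener
    docker run --rm --net=demo-net busybox cat /etc/resolv.conf   # nameserver 127.0.0.11
    # and names of other containers on the network resolve through it
    docker run --rm --net=demo-net busybox nslookup web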
Is it possible to know what range of external IPs the Dataflow workers on GCP use? The goal is to set up some kind of IP filtering on an external service, so that only our Dataflow jobs running on GCP can access the service.
The best solution would be to upgrade the external service so that you can use SSL or other mechanisms of strong authentication.
You can use the --network= option to control the GCE network that the worker VMs are assigned to. Take a look at the GCE docs on networking for details on how to set up a VPN (as the comment from Elmar suggested). You could also look at setting up a single machine in the network with a static external IP and using it as a proxy for the other VMs in the network.
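With the Dataflow Java SDK, the option is passed when the job is launched; a sketch (the main class, project, bucket, and network name are all assumptions):

    java -cp target/my-pipeline-bundled.jar com.example.MyPipeline \
      --runner=DataflowPipelineRunner \
      --project=my-gcp-project \
      --stagingLocation=gs://my-bucket/staging \
      --network=dataflow-workers-net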
This is not a usage pattern we have tested, so there may be issues with the latency or throughput of traffic through the proxy/VPN. You will likely need to be careful to send only your own traffic through this proxy so that you don't accidentally hijack the traffic each worker uses to communicate with the Dataflow service.
I created a DataStax Cassandra Enterprise cluster with 2 Cassandra nodes, 2 Search nodes, and 2 Analytics nodes.
Everything seems to work correctly EXCEPT that I can't connect to it from outside. If I'm on the node0 server I can run cassandra-cli and connect to the Cassandra nodes on port 9160, but when I try to connect using the datastax-rails gem, I get "No live servers". I also tried DataStax DevCenter, which connects on the native port 9042, but that didn't work either. I'm really puzzled; any help is appreciated.
So after some digging I found some issues:
1. Port 9160 is open and I can connect to it with telnet node0_ip 9160.
2. When I run rake ds:migrate, I get "No live servers in node0_ip".
3. I tried connecting with the 'cassandra' gem instead, from IRB:
a. client = Cassandra.new('example', 'node0_ip:9160')
b. client.insert(:users, "5", {'screen_name' => "buttonscat4"})
I got a similar ThriftClient::NoServersAvailable: No live servers error, but this time listing the IPs of all the nodes in the cluster.
4. After adding client.disable_node_auto_discovery!, I was able to connect and add data using the 'cassandra' gem.
5. I also found at https://github.com/cassandra-rb/cassandra/issues/171 that I need to change my server to bind to a non-loopback address, but I have no idea what that means.
The question now is: how?
Sounds like you need to open up your EC2 security group to the outside on port 9160, specifically the security group that your node0 is using.
You can find more information about them here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
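A sketch with the AWS CLI (the security group id and your client's public IP are assumptions; the same rule can be added in the EC2 console):

    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 9160 \
      --cidr 203.0.113.7/32    # your client's public IP, not 0.0.0.0/0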
I was getting the same error and got this to work by using disable_node_auto_discovery!
The documentation for this method says: "This is primarily helpful when the cassandra cluster is communicating internally on a different ip address than what you are using to connect. A prime example of this would be when using EC2 to host a cluster. Typically, the cluster would be communicating over the local ip addresses issued by Amazon, but any clients connecting from outside EC2 would need to use the public ip."
http://rdoc.info/github/cassandra-rb/cassandra/master/Cassandra:disable_node_auto_discovery!
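Putting it together, a minimal sketch with the same gem (node0_public_ip is a placeholder for your node's public address):

    require 'cassandra' # the cassandra-rb thrift gem used in the question

    # connect via the public IP and skip auto-discovery, so the client
    # does not swap in the cluster's internal (EC2-private) addresses
    client = Cassandra.new('example', 'node0_public_ip:9160')
    client.disable_node_auto_discovery!
    client.insert(:users, '5', 'screen_name' => 'buttonscat4')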
I need to secure the replication data stream between two Neo4j nodes (e.g. using SSL or TLS). Both are running in embedded mode inside two JBoss instances.
Is it possible, and how can I do it?
Thanks
AFAIK, Neo4j replication is not encrypted by itself. The easiest way would be to connect the cluster members over a VPN (e.g. using OpenVPN) and configure Neo4j to use the virtual network interface provided by the VPN.
An alternative might be stunnel.
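With stunnel you would wrap each replication channel in TLS, one tunnel per port. A rough sketch of the two config files (the port 6001 and the peer hostname are assumptions; use the ports your cluster's HA settings actually bind):

    # stunnel.conf on the connecting node
    [neo4j-ha]
    client  = yes
    accept  = 127.0.0.1:6001
    connect = other-node.example.com:6001

    # stunnel.conf on the receiving node
    cert = /etc/stunnel/stunnel.pem
    [neo4j-ha]
    accept  = 6001
    connect = 127.0.0.1:6001

Neo4j on the connecting side would then be pointed at 127.0.0.1:6001 instead of at the remote host directly.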
Update:
There is a nice blog post by John Russell on using OpenVPN to encrypt Neo4j cluster replication. Please note that it covers Neo4j <= 1.8; in Neo4j 1.9.x there is no ZooKeeper any more.