I need to configure Couchbase 2.2 to use short hostnames.
Currently I am using Couchbase 2.0.1, and in that case the solution was easy:
Set the short hostname in the /opt/couchbase/var/lib/couchbase/ip and /opt/couchbase/var/lib/couchbase/ip_start files.
Change extra="-name ns_1@$ip" to extra="-sname ns_1@$ip" in the _start() function in /opt/couchbase/bin/couchbase-server. This parameter is used to run erl (-run ns_bootstrap -- $extra).
These steps allowed me to configure each node with its short hostname and create the cluster based on them.
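For reference, the edit in _start() was roughly this (shown as before/after; it is the only change I made to that script):
# in /opt/couchbase/bin/couchbase-server, _start():
# before:
extra="-name ns_1@$ip"
# after:
extra="-sname ns_1@$ip"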
In Couchbase 2.2 I can't do that, because erl is run through the babysitter. I tried to configure the babysitter to use the short hostname, but I couldn't make it work...
The servers are deployed in a private virtualization environment that only handles short hostnames.
Each node has 2 IPs, one public and one private. If I ping a node from itself I get its private IP, and if I ping it from any other node I get its public IP.
For example, if I have one node:
myhost-00 (private IP: 192.168.8.170 public IP: 10.254.171.29)
from itself:
$ ping myhost-00
PING myhost-00 (192.168.8.170) 56(84) bytes of data.
from other node:
$ ping myhost-00
PING myhost-00 (10.254.171.29) 56(84) bytes of data.
Any ideas?
I figured out a workaround:
Firstly, I don't modify any of the Couchbase files.
Secondly, I add a fake domain to the short hostname in each node's /etc/hosts file. In each file I put the private IP for the current node and the public IP for the other nodes, together with the fake domain.
For example, assuming I have 2 hosts:
myhost-00 (private IP: 192.168.8.170 public IP: 10.254.171.29)
myhost-01 (private IP: 192.168.8.168 public IP: 10.254.171.30)
myhost-00 /etc/hosts file:
...
192.168.8.170 myhost-00.mydomain
10.254.171.30 myhost-01.mydomain
...
myhost-01 /etc/hosts file:
...
10.254.171.29 myhost-00.mydomain
192.168.8.168 myhost-01.mydomain
...
Finally, I create the cluster using the hostnames with the fake domain (myhost-00.mydomain and myhost-01.mydomain).
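For example, adding the second node to the cluster with couchbase-cli looks roughly like this (credentials are placeholders; check couchbase-cli --help for the exact options in your version):
/opt/couchbase/bin/couchbase-cli server-add -c myhost-00.mydomain:8091 -u Administrator -p password --server-add=myhost-01.mydomain:8091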
At this time, Couchbase does not allow the use of short names for the node name. There are ticket updates that discuss and confirm this situation.
For long hostnames, you will find steps for using hostnames at http://docs.couchbase.com/couchbase-manual-2.2/#couchbase-getting-started-hostnames and http://docs.couchbase.com/couchbase-manual-2.5/cb-install/#using-hostnames, depending on your version. You can use a hostname when a cluster is created or when a node is added to a cluster, or you can change from an IP address to a hostname via a REST API command. See the docs for full details.
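For example, the hostname can be set via the REST API roughly like this (credentials and hostname are placeholders; see the linked docs for the exact call in your version):
curl -u Administrator:password -X POST http://localhost:8091/node/controller/rename -d hostname=myhost-00.mydomain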
I am trying to access my host system from a Docker container.
I have tried all of the following instead of 127.0.0.1 and localhost:
gateway.docker.internal,
docker.for.mac.host.internal,
host.docker.internal,
docker.for.mac.localhost,
but none of them seem to work.
If I run my docker run command with --net=host, I can indeed access localhost; however, none of my port mappings get exposed, so the container is inaccessible from outside Docker.
I am using Docker version 20.10.5, build 55c4c88
Some more info: I am running a piece of software called Impervious (a layer on top of the Bitcoin Lightning Network). It needs to connect to my local Polar Lightning node on localhost:10001. Here is the config file the tool itself uses (see the lnd section):
# Server configurations
server:
  enabled: true # enable the GRPC/HTTP/websocket server
  grpc_addr: 0.0.0.0:8881 # SET FOR DOCKER
  http_addr: 0.0.0.0:8882 # SET FOR DOCKER

# Redis DB configurations
sqlite3:
  username: admin
  password: supersecretpassword # this will get moved to environment variable or generated dynamically

###### DO NOT EDIT THE BELOW SECTION#####
# Services
service_list:
  - service_type: federate
    active: true
    custom_record_number: 100000
    additional_service_data:
  - service_type: vpn
    active: true
    custom_record_number: 200000
    additional_service_data:
  - service_type: message
    active: true
    custom_record_number: 400000
    additional_service_data:
  - service_type: socket
    active: true
    custom_record_number: 500000
    additional_service_data:
  - service_type: sign
    active: true
    custom_record_number: 800000
    additional_service_data:
###### DO NOT EDIT THE ABOVE SECTION#####

# Lightning
lightning:
  lnd_node:
    ip: host.docker.internal
    port: 10001 #GRPC port of your LND node
    pub_key: 025287d7d6b3ffcfb0a7695b1989ec9a8dcc79688797ac05f886a0a352a43959ce #get your LND pubkey with "lncli getinfo"
    tls_cert: /app/lnd/tls.cert # SET FOR DOCKER
    admin_macaroon: /app/lnd/admin.macaroon # SET FOR DOCKER

federate:
  ttl: 31560000 #Federation auto delete in seconds
  imp_id: YOUR_IMP_ID #plain text string of your IMP node name

vpn:
  price: 100 #per hour
  server_ip: http://host.docker.internal #public IP of your VPN server
  server_port: 51820 #port you want to listen on
  subnet: 10.0.0.0/24 #subnet you want to give to your clients. .1 == your server IP.
  server_pub_key: asdfasdfasdf #get this from your WG public key file
  allowed_ips: 0.0.0.0/0 #what subnets clients can reach. Default is entire world.
  binary_path: /usr/bin/wg #where your installed the "wg" command.
  dns: 8.8.8.8 #set your preferred DNS server here.

socket:
  server_ip: 1.1.1.1 #public IP of your socket server
I run Impervious using the following docker command:
docker run -p8881:8881 -p8882:8882 -v /Users/xxx/dev/btc/impervious/config/alice-config-docker.yml:/app/config/config.yml -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/tls.cert:/app/lnd/tls.cert -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/data/chain/bitcoin/regtest/admin.macaroon:/app/lnd/admin.macaroon -it impant/imp-releases:v0.1.4
but it just hangs when it tries to connect to the node at host.docker.internal.
Have you tried docker-mac-net-connect?
The problem is related to macOS. Unlike Docker on Linux, Docker for macOS does not expose container networks directly on the macOS host.
You can use host.docker.internal, which resolves to the localhost of the macOS host.
https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purpose and does not work in a production environment outside of Docker Desktop.
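A quick way to sanity-check the name from inside a container (busybox here is just an example image):
docker run --rm busybox nslookup host.docker.internal
docker run --rm busybox ping -c 1 host.docker.internal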
Mac running the desktop version of Docker:
Docker isn't running directly on the host machine; it uses a kind of virtual machine that includes a Linux kernel. The network of this virtual machine is different from the host machine's network, and a kind of VPN connection is used to connect from your Mac host to a running Docker container.
When you run your container with the --net=host switch, you connect it to the virtual machine's network instead of your host machine's network, as would be the case on Linux.
So trying to connect to 127.0.0.1 or localhost does not reach the running container.
The solution to this issue is to expose the needed ports from the running container:
docker run -p 8080:8080
If you need to expose all ports from your container, you can use the -P switch.
For the opposite direction, use the host.docker.internal name from inside the container.
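For example, if a service is listening on port 8080 on the Mac, a container can reach it like this (the image and port here are just examples):
docker run --rm curlimages/curl http://host.docker.internal:8080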
More documentation about docker desktop for Mac networking
I have the DNS server Unbound in a Docker container. This container has the following port mapping in the Docker daemon:
0.0.0.0:53->53/tcp, 0.0.0.0:53->53/udp
The docker host has the IP address 192.168.24.5 and a local DHCP server announces the host's IP as the local DNS server. This works fine all over my local network.
The host itself uses this DNS server through the IP 192.168.24.5. That's the address that is put to the host's /etc/resolv.conf. (I know it would not work with docker if there was 127.0.0.1 as the nameserver address.)
I have some other docker containers and they are supposed to use this DNS server as well. The point is, they don't.
What actually happens is this:
Within a random container I can ping the host's address as well as the address of the unbound container. But when I use dig inside a container I get these results:
# dig @172.17.0.6 ...
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 22778
;; flags: qr rd ad; QUERY: 0, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
# dig @192.168.24.5 ...
;; reply from unexpected source: 172.17.0.1#53, expected 192.168.24.5#53
This looks like some internal DNS server intercepting the queries and trying to answer them. That would be fine if it used the host's DNS server to get the answer, but it doesn't. DNS doesn't work at all in the containers.
Am I doing something wrong, or is Docker doing something it should not?
The issue is the iptables UDP NAT for the DNS server: you are querying the host IP, but the response comes back from the Docker bridge network's address.
You can fix this issue in at least two ways:
Use the container IP (of the DNS container) as the DNS resolver, if possible.
or
Give --net=host to your DNS server container and remove the port mapping altogether. Then using the host IP as DNS will work as expected.
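A sketch of both options (the DNS container IP 172.17.0.6 is taken from the question; my-unbound-image is a placeholder):
# option 1: point other containers at the DNS container's IP
docker run --rm --dns 172.17.0.6 busybox nslookup example.com
# option 2: run the DNS container on the host network, without port mappings
docker run -d --net=host my-unbound-image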
I'm trying to use distributed programming in Erlang.
But I have a problem: I can't get two Erlang nodes to communicate.
I tried to set the same atom as the magic cookie, but it didn't work.
I tried the command net:ping(Node), but the response was pang (it didn't recognize the other node), and I used nodes() to check whether my first node sees the second node, but that didn't work either.
The first and second nodes are CentOS VMs in VMware, using a bridged connection in the network adapter.
I ran ping outside Erlang between the VMs and they recognize each other.
I start the first node, and the second node opens a process, but it can't find the pong node.
(pong@localhost)8> tut17:start_pong().
true
(ping@localhost)5> c(tut17).
{ok,tut17}
(ping@localhost)6> tut17:start_ping(pong@localhost).
<0.55.0>
Thank you!
A similar question here.
The distribution is provided by a daemon called the Erlang Port Mapper Daemon (epmd). By default it listens on port 4369, so you need to make sure that port is open between the nodes. Additionally, each started Erlang VM opens an additional port to communicate with other VMs. You can see those ports with epmd -names:
g@someserv1:~ % epmd -names
epmd: up and running on port 4369 with data:
name hbd at port 22200
You can check whether the port is open by telnetting to it, e.g.:
g@someserv1:~ % telnet 127.0.0.1 22200
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
^]
Connection closed by foreign host.
You can change the port to the one you want to check, e.g. 4369, and the IP to the desired IP. Doing ping is not enough, because it uses its own ICMP protocol, which is different from the TCP used by the Erlang distribution to communicate; e.g. ICMP may be allowed while TCP is blocked.
Edit:
Please follow this guide Distributed Erlang to start an Erlang VM in distributed mode. Then you can use net_adm:ping/1 to connect to it from another node, e.g.:
(hbd@someserv1.somehost.com)17> net_adm:ping('hbd@someserv2.somehost.com').
pong
Only then will epmd -names show the started Erlang VM in the list.
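For reference, starting such a VM in distributed mode looks roughly like this (the node name matches the example above; the -setcookie value is just an example):
erl -name hbd@someserv1.somehost.com -setcookie mycookie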
Edit2:
Assume that there are two hosts, A and B. Each one runs one Erlang VM. Running epmd -names on each host shows, for example:
Host A:
epmd: up and running on port 4369 with data:
name servA at port 22200
Host B:
epmd: up and running on port 4369 with data:
name servB at port 22300
You need to be able to do:
On Host A:
telnet HostB 4369
telnet HostB 22300
On Host B:
telnet HostA 4369
telnet HostA 22200
where HostA and HostB are those hosts' IP addresses (e.g. HostA is the IP of Host A and HostB is the IP of Host B).
If the telnet works correctly then you should be able to do net_adm:ping/1 from one host to the other, e.g. on Host A you would ping the name of Host B. The name is what the command node(). returns.
You need to make sure you have a node name for your nodes, or they won't be available to connect with. E.g.:
erl -sname somenode@node1
If you're using separate hosts, then you need to make sure that the node names are resolvable to IP addresses somehow. An easy way to do this is using /etc/hosts.
# Append a similar line to the /etc/hosts file
10.10.10.10 node1
For more helpful answers, you should post what you see in your terminal when you try this.
EDIT
It looks like your shell is auto-picking "localhost" as the host part of the node name. You can't send messages to another host with the address "localhost". When specifying the name on the shell, try using the @ syntax to specify the host name as well:
# On host 1:
erl -sname ping@host1
# On host 2
erl -sname pong@host2
Then edit the host file so host1 and host2 will resolve to the right IP.
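Since the question also mentions magic cookies: both nodes need the same cookie as well, which you can set at startup; a minimal sketch (the cookie value here is just an example):
# On host 1:
erl -sname ping@host1 -setcookie mycookie
# On host 2:
erl -sname pong@host2 -setcookie mycookie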
Let me first explain what I'm trying to do, as there may be multiple ways to solve this. I have two containers in docker 1.9.0:
node001 (172.17.0.2) (sudo docker run --net=<<bridge or test>> --name=node001 -h node001 --privileged -t -i -v /sys/fs/cgroup:/sys/fs/cgroup <<image>>)
node002 (172.17.0.3) (,,)
When I launch them with --net=bridge I get the correct value for SSH_CLIENT when I ssh from one to the other:
[root@node001 ~]# ssh root@172.17.0.3
root@172.17.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.17.0.3 56194 22
[root@node001 ~]# ping -c 1 node002
ping: unknown host node002
In Docker 1.8.3 I could also use the hostnames I supply when I start them; in 1.8.3 that last ping statement works!
In Docker 1.9.0 I don't see anything being added to /etc/hosts, and the ping statement fails. This is a problem for me. So I tried creating a custom network...
docker network create --driver bridge test
When I launch the two containers with --net=test I get a different value for SSH_CLIENT:
[root@node001 ~]# ssh root@172.18.0.3
root@172.18.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.1 57388 22
[root@node001 ~]# ping -c 1 node002
PING node002 (172.18.0.3) 56(84) bytes of data.
64 bytes from node002 (172.18.0.3): icmp_seq=1 ttl=64 time=0.041 ms
Note that the IP address is not node001's; it seems to represent the Docker host itself. The hosts file is correct though, containing:
172.18.0.2 node001
172.18.0.2 node001.test
172.18.0.3 node002
172.18.0.3 node002.test
My current workaround is using docker 1.8.3 with the default bridge network, but I want this to work with future docker versions.
Is there any way I can customize the test network to make it behave similarly to the default bridge network?
Alternatively:
Maybe make the default bridge network write out the /etc/hosts file in docker 1.9.0?
Any help or pointers towards different solutions will be greatly appreciated.
Edit: 21-01-2016
Apparently the problem is fixed in 1.9.1. With bridge in Docker 1.8 and with a custom network (--net=test) in 1.9.1, the behaviour is now correct:
[root@node001 tmp]# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.3 52162 22
Retried in 1.9.0 to see if I wasn't crazy, and yeah there the problem occurs:
[root@node001 tmp]# ip route
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3
[root@node002 ~]# env|grep SSH_CLI
SSH_CLIENT=172.18.0.1 53734 22
So after remove/stop/start-ing the instances the IP addresses were not exactly the same, but it can easily be seen that the SSH_CLIENT source IP is not correct in the last code block. Thanks @sourcejedi for making me re-check.
Firstly, I don't think it's possible to change any settings on the default network, i.e. to write /etc/hosts. You apparently can't delete the default networks, so you can't recreate them with different options.
Secondly
Docker is careful that its host-wide iptables rules fully expose containers to each other’s raw IP addresses, so connections from one container to another should always appear to be originating from the first container’s own IP address. docs.docker.com
I tried reproducing your issue with the random containers I've been playing with. Running wireshark on the bridge interface for the network, I didn't see my ping packets. From this I conclude my containers are indeed talking directly to each other; the host was not doing routing and NAT.
You need to check the routes on your client container with ip route. Do you have a route for 172.18.0.0/16? If you only have a default route, it could try to send everything through the Docker host. And it might get confused and do masquerading as if it was talking with the outside world.
This might happen if you're running some network configuration in your privileged container. I don't know what's happening if you're just booting it with bash though.
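For example, using the container and network names from the question, you could inspect things from the host like this:
docker exec node001 ip route
docker network inspect test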
I have been trying to set up geo-replication with GlusterFS servers. Everything worked as expected in my test environment and in my staging environment, but when I tried production I got stuck.
Let's say I have:
a GlusterFS server on public IP 1.1.1.1
a GlusterFS slave on public IP 2.2.2.2, but this IP is on interface eth1
The eth0 on the GlusterFS slave server is 192.168.0.1.
So when I run the command on 1.1.1.1 (firewall and SSH keys are set properly)
gluster volume geo-replication vol0 2.2.2.2::vol0 create push-pem
I get an error.
Unable to fetch slave volume details. Please check the slave cluster and slave volume.
geo-replication command failed
The error itself is not that important in this case; the problem is the slave IP address:
2015-03-16T11:41:08.101229+00:00 xxx kernel: TCP LOGDROP: IN= OUT=eth0 SRC=1.1.1.1 DST=192.168.0.1 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=24243 DF PROTO=TCP SPT=1015 DPT=24007 WINDOW=14600 RES=0x00 SYN URGP=0
As you can see in the firewall drop log above, port 24007 of the slave gluster daemon is advertised on the private IP of interface eth0 on the slave server, when it should be the IP of eth1. So the master cannot connect and will time out.
Is there a way to force the gluster server to advertise interface eth1, or to bind to it only?
I use CFEngine and Ansible to push configuration, so binding to an interface would be a better solution than binding to an IP, but any solution will do.
Thank you in advance.
I've encountered this issue but in a different context.
I was trying to geo-replicate two nodes which were both behind a NAT (AWS instances in different regions).
When the master connects to the slave via the public IP to check for volume compatibility/size and other details, it retrieves the hostname of the slave, which usually resolves to something that only has meaning in that remote region.
Then it uses that hostname to dial back to the slave when later setting up the session, which fails, as that hostname resolves to a private IP in a different region.
My workaround for the issue was to use hostnames when creating the volumes, probing for peers, and establishing geo-replication, and then to add an /etc/hosts entry mapping the slave's hostname (which usually resolves to its private IP) to its public IP instead.
This gets you to the point where you can establish a session, but I haven't had any luck actually getting it to sync, as it uses the wrong IP somewhere along the way again.
Edit:
I've actually managed to get it running by adding /etc/hosts hacks on both sides.
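Roughly, the /etc/hosts hacks look like this (the hostnames here are placeholders; the IPs follow the question's example, i.e. each side's public address):
# on the master, map the slave's hostname to the slave's public IP
2.2.2.2    slave-hostname
# on the slave, map the master's hostname to the master's public IP
1.1.1.1    master-hostname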
GlusterFS has no notion of the network layer. Check your routes. If the next-hop for your geo-replication slave is on eth1, then gluster will open a port on that interface for the slave IP address.
Also make sure your firewall is configured to forward geo-replication traffic on this port.
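For example, on the slave you can check which interface and source IP would be used to reach the master, and make sure port 24007 (seen in the log above) is open on eth1 (an iptables sketch; adapt it to your firewall setup):
# which interface/source IP is used to reach the master?
ip route get 1.1.1.1
# accept gluster management traffic on eth1
iptables -A INPUT -i eth1 -p tcp --dport 24007 -j ACCEPT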